[SOURCE: https://en.wikipedia.org/wiki/Groac%27h] | [TOKENS: 4423] |
Groac'h A groac'h (Breton for "fairy", "witch" or "crone", pl. groagez) is a kind of Breton water-fairy. Seen in various forms, often by night, many are old, similar to ogres and witches, sometimes with walrus teeth. Supposed to live in caverns, under the beach and under the sea, the groac'h has power over the forces of nature and can change its shape. It is mainly known as a malevolent figure, largely because of Émile Souvestre's story La Groac'h de l'île du Lok, in which the fairy seduces men, changes them into fish and serves them as meals to her guests, on one of the Glénan Islands. Other tales present them as old solitary fairies who can overwhelm with gifts the humans who visit them. Several place-names of Lower Brittany are connected with the groac'h, especially the names of some megaliths in Côtes-d'Armor, as well as the island of Groix in Morbihan and the lighthouse of La Vieille. The origin of these fairies, which belong to the archetype of "the crone", is to be found in the ancient female divinities demonized by Christianity. The influence of Breton writers in the 19th century brought them closer to the classical fairy figure. The groac'h has several times appeared in recent literary works, such as Nicolas Bréhal's La Pâleur et le Sang (1983). Etymology According to Philippe Le Stum, groac'h originally seems to have been the Breton word for fairies in general. It evolved to mean an old creature of deceptive beauty. It is often spelled "groah", the final consonant being pronounced like the German ch. One of the possible plurals is groagez. According to Joseph Rio, the assimilation of the groac'h with the fairy is more the result of the influence of Émile Souvestre's tale, and of commentaries on it, than a belief deriving from the popular traditions of Lower Brittany: "The Groac'h of Lok Island", a story intended for literate audiences, uses a writing technique based on the interchangeable use of the words "fairy" and "groac'h". Anatole Le Braz comments on this name that "Groac'h is used in good and bad senses by turns. It can mean an old witch or simply an old woman." Characteristics The groagez are the fairies most often encountered in Brittany, generally in forests and near springs: they are essentially the fairies of Breton wells. Likewise, a certain number of "sea fairies" bear the name of groac'h, sometimes interchangeably with "morgen" or "siren". Joseph Mahé, writing in 1825, speaks of a malicious creature that he was frightened of as a child, reputed to inhabit wells in which it drowned those children who fell in. It is possible that Souvestre drew the evil characteristics of "his" groac'h from Mahé, and indeed he admits in his notes a certain reinvention of tradition. Because of their multiform character, the groagez are hard to define. One of them is said to frequent the neighborhood of Kerodi, but the descriptions vary: an old woman bent and leaning on a crutch, or a richly dressed princess accompanied by korrigans. Often the descriptions insist on the groac'h's likeness to an old woman; Françoise Morvan mentions the name "beetle-fairy". She notes cases where the groagez have exceptionally long "walrus" teeth, which may be the length of a finger or may even drag along the ground, though in other cases they have no such teeth, or at any rate nothing is said of them. Sometimes they are hunchbacked. The storyteller Pierre Dubois describes them as shapeshifters capable of taking on the most flattering or the most repugnant appearance: swans or wrinkled, peering hobgoblins. 
He attributes green teeth to them, or more rarely red, as well as "a coat of scales". For Morvan, the variety of these descriptions is the result of two phenomena. On the one hand, it is possible that these fairies change their appearance as they age, becoming like warty frogs. On the other hand, a Russian tradition reported by Andrei Sinyavsky has the fairies go through cycles of rejuvenation and ageing according to the cycles of the moon: a similar tradition may have existed in Brittany. Pierre Dubois compares the groac'h to an ogress or a "water-witch". André-François Ruaud relates it rather to undines, Richard Ely and Amélie Tsaag Valren to witches, and Édouard Brasey describes it as a "lake fairy". Be that as it may, the groac'h is one of the most powerful fairies in Breton waters. In its aquatic habitat, as on land, it has power over the elements. The groac'h of Lanascol Castle could shake the dead autumn leaves and turn them to gold, or bend the trees and make the ponds ripple as it passed. Although it is mostly known by negative representations, the groac'h is not necessarily bad. It may politely receive humans in its lair and offer treasure, magical objects (most often in threes), and cures. Like many other fairies, it also takes care of laundry and spinning. They are overbearing, but generally full of good intentions. Most often, the groagez are described as being solitary in their retreats under the sea, in a rock or in the sands, but some stories tell of an entirely female family life. They do not abandon their children or leave changelings. Sometimes they are accompanied by a green water horse and a pikeman. They are more inconstant and more sensitive than other Breton fairies, taking offence easily. In Finistère, groagez reveal to miners the existence of silver-bearing lead. Stories and collected legends Several collections report a groac'h in one or another place in Brittany. Souvestre evokes one of these fairies, likened to a naiad, in a well in Vannes: this legend seems to have been quite popular in its day, and could have the same sources as the tale of the fairy of the well. It belongs to the theme of "spinners by the fountain" in the Aarne-Thompson classification. A story collected by Anatole Le Braz makes one of these fairies the personification of the plague: an old man from Plestin finds a groac'h who asks for his help in crossing a river. He carries it, but it becomes heavier and heavier, so that he sets it back down where he found it, thereby preventing an epidemic of plague in the Lannion district. François-Marie Luzel also brings together several traditions around the groagez: people would shun them as they would the Ankou. Some are known to have the power of changing into foals, or to haunt the forest of Coat-ann-noz (the wood of the night). The duke's pond in Vannes is said to house a groac'h, a former princess who threw herself into the water to flee a too-importunate lover, and who is sometimes seen combing her long blonde hair with a golden comb. The most famous story evoking a groac'h is La Groac'h de l'île du Lok, collected, written and arranged by Émile Souvestre for his book Le Foyer breton (1844). Houarn Pogamm and Bellah Postik, orphan cousins, grow up together in Lannilis and fall in love, but they are poor, so Houarn leaves to seek his fortune. Bellah gives him a little bell and a knife, but keeps a third magic object for herself, a wand. 
Houarn arrives at Pont-Aven and hears about the groac'h of Lok Island, a fairy who inhabits a lake on the largest of the Glénan Islands, reputed to be as rich as all the kings on earth put together. Houarn goes to the island of Lok and gets into an enchanted boat in the shape of a swan, which takes him underwater to the home of the groac'h. This beautiful woman asks him what he wants, and Houarn replies that he is looking for the wherewithal to buy a little cow and a lean hog. The fairy offers him some enchanted wine to drink and asks him to marry her. He accepts, but when he sees the groac'h catch and fry fish which moan in the pan, he begins to be afraid and regrets his decision. The groac'h gives him the dish of fried fish and goes away to look for wine. Houarn draws his knife, whose blade dispels enchantments. All the fish stand up and become little men: they are victims of the groac'h, men who had agreed to marry her before being metamorphosed and served as dinner to the other suitors. Houarn tries to escape, but the groac'h comes back and throws at him the steel net she wears on her belt, which turns him into a frog. The bell that he carries around his neck rings, and Bellah hears it at Lannilis. She takes hold of her magic wand, which turns itself into a fast pony, then into a bird to cross the sea. At the top of a rock, Bellah finds a little black korandon, the groac'h's husband, and he tells her of the fairy's vulnerable point. The korandon offers Bellah men's clothes to disguise herself in. She goes to the groac'h, who is very happy to receive such a beautiful boy and yields to the request of Bellah, who would like to catch her fish with the steel net. Bellah throws the net on the fairy, cursing her thus: "Become in body what you are in heart!". The groac'h changes into a hideous creature, the queen of mushrooms, and is thrown into a well. The metamorphosed men and the korandon are saved, and Bellah and Houarn take the treasures of the fairy, marry and live happily ever after. For the scholar Joseph Rio this tale is important documentary evidence on the character of the groac'h. Souvestre explained that he chose to place it on the island of Lok because of the multiplicity of storytellers' versions that do so. La Groac'h de l'île du Lok was even more of a success in Germany than it had been in Brittany. Heinrich Bode published it under the title Die Wasserhexe in 1847, and it was republished in 1989 and 1993. The story was likewise translated into English (The Groac'h of the Isle) and published in The Lilac Fairy Book in 1910. Between 1880 and 1920 it served as study material for British students learning French. Another tale, collected by Joseph Frison around 1914, tells of a young girl who goes one night to a spring to help her mother. She discovers that a groac'h lives there. The fairy tells her never to come back by night, otherwise she will never see her mother again. The mother falls ill, and the girl returns to draw some water in the night in spite of the prohibition. The groac'h catches the girl and keeps her in its cave, which has every possible comfort. Although she is separated from her family, the girl is happy there. A young groac'h comes to guard her while the groac'h of the spring is away visiting one of its sisters. The elder groac'h dies while with her sister, having first sent a message to the young groac'h: the girl is free to leave if she wishes. 
Knowing that the home of the groac'h is much more comfortable than her own, the girl asks for a key so that she can enter or leave at her own convenience. The young groac'h has her wait for one month, until the elder groac'h has died. She then gives her two keys, with instructions never to stay outside after sunset. The girl meets one of her family while out walking, but resolves to return early to keep her promise. Later she meets a very handsome young man, whom she leaves, promising to come back the next day. The groac'h advises her to marry him, assuring her that this will lift the prohibition on her returning after sunset. She follows this advice and lives happily ever after with her new husband. According to a more recent story (collected by Théophile Le Graët in 1975), a widower with a daughter marries a black-skinned woman who has a daughter, also black. The new bride treats her stepdaughter very badly and demands she spin all day long. One day, when near a well, the girl encounters an old walrus-toothed fairy who offers her new clothes, heals her fingers, spins in her place and offers to share its house with her. She eagerly moves in and is very happy there. When she eventually announces that she wants to leave, the fairy gives her a magic stone. She goes back to her stepmother's home where, with her new clothes, no-one recognizes her. With the fairy stone she can get everything she wants. The black girl becomes jealous and throws herself down the well in the hope of getting the same gifts, but the fairy only gives her a thistle. The black girl wishes for the greatest prince in the world to appear so that he can ask for her hand in marriage, but it is the Devil who appears and carries her away. In the end the good girl returns to her home in the well, and sometimes she can be heard singing. Another tale takes place on the island of Groagez (the "island of women" or the "fairy island"), which Paul Sébillot describes as being the home of an old woman who is a spinner and a witch; it is in Trégor, one kilometer from Port-Blanc. According to this tale, collected by G. Le Calvez at the end of the 19th century, a vor Groac'h, "sea fairy", lives in a hollow rock on the island. A woman happens to pass by, and comes across the old fairy spinning with her distaff. The groac'h invites the woman to approach and gives her its distaff, instructing her that it will bring her her fortune, but that she must tell no-one about it. The woman goes home and quickly becomes rich thanks to the distaff, the thread of which never runs out and is much finer in quality than all others. But the temptation to speak about it becomes too great for her. The moment she reveals that the distaff comes from a fairy, all the money she has earned from it disappears. Another story was collected by Anatole Le Braz, who makes reference to the belief in fairies among people of his acquaintance living near his friend Walter Evans-Wentz. A ruined manor house called Lanascol Castle is said to have housed a fairy known as the Lanascol groac'h. One day, the landowners put up for sale a part of the estate where they no longer live. A notary from Plouaret conducts the auction, during which prices go up very high. Suddenly, a gentle yet imperious female voice makes a bid, raising the price by a thousand francs. Everyone in attendance looks to see who spoke, but there is no woman in the room. The notary then asks loudly who bid, and the female voice answers "groac'h Lanascol!". Everyone flees. 
Since then, according to Le Braz, the estate has never found a buyer. Localities, place-names and religious practices Many place-names in Lower Brittany are attributed to a groac'h. The Grand Menhir, called Men Er Groah, at Locmariaquer probably owes its name to an amalgamation of the Breton word for "cave", groh, with the word groac'h. Pierre Saintyves cites from the same commune a "table of the old woman", a dolmen called daul ar groac'h. At Maël-Pestivien, three stones two meters high, placed next to each other in the village of Kermorvan, are known by the name of Ty-ar-Groac'h, or "the house of the fairy". In 1868, an eight-meter menhir called Min-ar-Groach was destroyed in Plourac'h. In Cavan, the tomb of the "groac'h Ahès", or "Be Ar Groac'h", has come to be attributed not to the groac'h but to the giant Ahès. There is a Tombeau de la Groac'h Rouge (Tomb of the Red Groac'h) in Prat, attributed to a "red fairy" who brought the stones in her apron. This megalith, however, is almost destroyed. According to Souvestre and the celtomaniac Alfred Fouquet (1853), the island of Groix got its name (in Breton) from the groagez, described by them as "druidesses" now seen as old women or witches. For the writer Claire de Marnier this tradition, which makes the islanders sons of witches, is a "remarkable belief" peculiar to "the Breton soul". The rock of Croac'h Coz, or "the island of the old fairy", in the commune of Plougrescant, was the home of an old groac'h who would engage in spinning from time to time. Sébillot relates that the fishermen of Loguivy (in Ploubazlanec) once feared to pass near the cave named Toul ar Groac'h, "fairy hole", and preferred to spend the night under their beached boats until the next tide rather than risk angering the fairy. Similarly, Anatole Le Braz cites Barr-ann-Heol, near Penvénan, as a dangerous place where a groac'h keeps watch, ready to seize benighted travellers at a crossroads. On Ushant many place-names refer to it, including the Pointe de la Groac'h and the lighthouse of La Vieille, in reference, according to Georges Guénin, to "a kind of witch". Some traces of possible religious invocation of these fairies are known. Paul-Yves Sébillot says that the sick once came to rub the pre-Christian statue called Groac'h er goard (or Groac'h ar Goard) so as to be healed. The seven-foot-tall granite statue known as the Venus of Quinipily represents a naked woman of "indecent form" and could be a remnant of the worship of Venus or Isis. Analysis According to Marc Gontard, the groac'h demonstrates the demonization of ancient goddesses under the influence of Christianity: it was changed into a witch just as other divinities became lost girls and mermaids. Its palace under the waves is a typical motif of fairy tales and folk-stories, one also found in, for example, the texts of the Arthurian legend, Irish folklore and several Hispanic tales. Pierre Dubois likens the groac'h to many maleficent water-fairies, like Peg Powler, Jenny Greenteeth, the mère Engueule and the green ogresses of Cosges, who drag people underwater to devour them. Joseph Rio places it within a general evolution of Breton fairies between 1820 and 1850: in the texts of the scholars of the time, they changed more and more from small, dark-skinned, wrinkled creatures close to the korrigans into pretty women of normal size, probably to compete with the Germanic fairies. The groac'h has been likened to the enigmatic and archetypal character of "the Crone", studied by various folklorists. 
This name, in French la Vieille, is often applied to megaliths. Edain McCoy equates the groac'h with la Vieille, citing especially the regular translation of the word as "witch". She adds that several Breton tales present this creature in a negative way, while none draw a flattering portrait. In literature A groac'h appears in the novel La Pâleur et le Sang published by Nicolas Bréhal in 1983. This horrible witch, feared by the fishermen, lays a curse on the Bowley family. A "mystical and fantastic" novel, La Pâleur et le Sang includes the groac'h among the mysterious and almost diabolical forces that assail the island of Vindilis. This old woman is portrayed as having "magical and evil powers", and as threatening with reprisals those characters who offend her. Her murder is one of the causes of the misfortunes that hit the island. A groac'h also appears in Absinthes & Démons, a collection of short stories by Amber Dubois published in 2012. In Jean Teulé's novel Fleur de tonnerre (2013), groac'h is a nickname given to Hélène Jégado when she is a little girl, in Plouhinec.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Robot_Battle#Robot_scripting_language] | [TOKENS: 1034] |
GarageGames GarageGames was a game technology and software developer. GarageGames was the parent company of GG Interactive, developers of educational technology in the areas of computer science, video game development and programming. In addition, the company had been a video game developer and publisher. GarageGames created several game engines targeted at indie development. Founded in Eugene, Oregon, the company had offices in Las Vegas, Nevada, United States and its headquarters in Vancouver, Washington. In 2007, GarageGames was acquired by IAC and the company was renamed TorquePowered. In 2011, the company was purchased by Graham Software Development and reverted to the original name GarageGames. History GarageGames was founded in Eugene, Oregon in 2000 by Jeff Tunnell, Tim Gift, Rick Overman, and Mark Frohnmayer. Working in their garage on severance checks, the founders derived the name GarageGames as a play on the term "garage band", meant to evoke a similar attitude in game development. The stated goal of the original founders of GarageGames was to offer licensing of game engines to virtually anyone, allowing independent game-makers more options in developing and publishing video games. In 2001, GarageGames released the Torque game engine. It was used to create the Tribes game series and was released at an initial price point that allowed independent game developers access. Later, the company expanded its product lines with additional tools and more advanced engines, and introduced tiered licensing. In 2005, the company introduced Enterprise licenses for large companies and educational institutions, available for annual fees ranging from tens of thousands to hundreds of thousands of dollars per year. In 2006, its developer community surpassed 100,000 users. Over its history, the company launched several of its own games, including Marble Blast Ultra for Microsoft Windows and Xbox Live Arcade. In 2006, GarageGames acquired BraveTree Technologies, developers of Think Tanks and real-time networked multiplayer physics technology. In 2007, Barry Diller and InterActive Corporation (NASD: IACI) acquired a majority interest in GarageGames for an estimated $80–100M in cash and renamed the company InstantAction. InterActive Corporation later bought out the remainder of GarageGames' equity for an undisclosed sum, and on July 15, 2009, Louis Castle, notable for his work on the Command & Conquer series, became the CEO of GarageGames and InstantAction. The company headquarters were moved to Las Vegas and some employees relocated to Portland, Oregon. Shortly after the move, the "GarageGames" brand was retired. On November 11, 2010, it was announced that IAC was shutting down InstantAction, and that the intellectual property for the Torque game engine would be sold off. On January 20, 2011, the Torque engine and GarageGames brand were purchased and the company was relaunched as GarageGames, with new CEO Eric Preisz. The company moved to a new office in Las Vegas, Nevada. In 2011, GarageGames began doing game and technology-based service work. The company created the Microsoft Digital Literacy Program for Windows 8 and an undisclosed project for a world-famous theme park. 
The company also created game-based learning courses for online colleges in the areas of criminal justice, customer service and career development. In 2014, GarageGames CEO Eric Preisz announced the establishment of GG|Interactive, a subsidiary of GarageGames that would focus on bringing game design, game programming and game development courses to middle schools, high schools and colleges. Under the product name Dev|Pro: Game Development Curriculum, the company offers digital education courses in the areas of computer science, game design and programming. Offices for GG|Interactive were established in Vancouver, Washington, while the Las Vegas offices remained open. Torque GarageGames first offered the Torque Game Engine for sale in 2000, offering the technology under a per-seat "Indie" license. GarageGames also offered "Commercial" licensing options to companies with more than $250,000 in annual revenues. In 2012, GarageGames announced that both the Torque 2D Engine and Torque 3D Engine would be offered free under an open-source MIT license. The source code was released on GitHub on September 20, 2012. Torque is primarily a video game development technology. Various versions of the engine have been used to develop more than 200 published games. It has been licensed by Electronic Arts, NC Soft, Sony, Disney, Vivendi Universal, Hasbro, and many other game teams and publishers, and it has been officially supported middleware for Microsoft and Nintendo. Torque is also used for non-game applications like serious games and virtual worlds. It has been licensed by NASA, L3 and Lockheed Martin, and it has been used for dozens of virtual-world applications like Onverse and by IBM for internal and external training simulations. Torque is currently used for education in more than 200 schools and universities worldwide.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Paul_Nakasone] | [TOKENS: 1056] |
Paul Nakasone Paul Miki Nakasone (Japanese: 仲宗根 幹, born 19 November 1963) is a retired four-star general in the United States Army who served as the commander of United States Cyber Command. He concurrently served as the director of the National Security Agency and as chief of the Central Security Service. Nakasone took command of the United States Second Army and Army Cyber Command in October 2016, until the Second Army's inactivation in March 2017. In May 2018, he became head of the National Security Agency, the Central Security Service, and the United States Cyber Command. He is on the board of directors of WitnessAI and OpenAI. Early life and education Nakasone was born in White Bear Lake, Minnesota. He is the son of Edwin M. Nakasone, a second-generation Japanese American and a retired United States Army colonel who served in the Military Intelligence Service during World War II, and Mary Anne Nakasone (née Costello). His paternal grandparents came from Misato village in the Nakagami District, Okinawa. Nakasone grew up in White Bear Lake and attended White Bear High School. He is married to Susan S. (née Sternberg) and has four children. Nakasone attended St. John's University, where he received a commission as a military intelligence officer in 1986 through the Army Reserve Officers' Training Corps program. Nakasone also attended the University of Southern California, earning an M.S. in Systems Management, as well as the National Defense Intelligence College and the United States Army War College, earning master's degrees from those institutions as well. He is also a graduate of the United States Army Command and General Staff College. Military career Nakasone has commanded at the company, battalion, and brigade levels. He also served in foreign assignments in Iraq, Afghanistan and Korea, and has served as a senior intelligence officer at the battalion, division, and corps levels. Nakasone served on the Joint Chiefs of Staff as deputy director for trans-regional policy in 2012, when he was promoted to the rank of brigadier general, and previously served as a staff officer for General Keith B. Alexander. Prior to his promotion to lieutenant general in 2016, Nakasone was the deputy commanding general of United States Army Cyber Command and later commander of the Cyber National Mission Force at Cyber Command. Nakasone has twice served as a staff officer for the Joint Chiefs of Staff and was the director of intelligence, J2, for the International Security Assistance Force in Afghanistan. On 14 October 2016, he took command of the United States Second Army and United States Army Cyber Command. Nakasone was also given control of United States Cyber Command's Joint Task Force-ARES, a task force designed to coordinate electronic counter-terrorist activities against the Islamic State. He served as commander of the Second Army until it was inactivated for the fourth time in its history on 31 March 2017, and continued to serve as commander of United States Army Cyber Command. In January 2018, it was reported that Nakasone was on the list of potential replacements for outgoing NSA Director Michael S. Rogers. In February 2018, he was nominated for promotion to general. In April 2018, Nakasone was unanimously confirmed by the United States Senate as director of the National Security Agency and head of the United States Cyber Command. He was also promoted to the rank of general. In May 2022, Nakasone was asked to remain as the head of U.S. Cyber Command and the National Security Agency until 2023. 
In those roles, he attracted attention for disclosing that the U.S. government took unspecified cyber offensive action against ransomware gangs operating outside the United States that targeted American infrastructure, as well as against Russian targets associated with the invasion of Ukraine. Retirement and later life Nakasone retired from the military on 1 February 2024. General Timothy D. Haugh succeeded him as Director of the NSA and head of Cyber Command. On 14 February 2024, Nakasone published an opinion article in the Washington Post arguing for Congress to reauthorize the Foreign Intelligence Surveillance Act, which was due to expire in spring 2024. Congress reauthorized the act on 20 April, hours before it would have expired. In May 2024, Nakasone was named Founding Director of Vanderbilt University's new Institute of National Security. Nakasone will also hold a Research Professorship within Vanderbilt's School of Engineering and serve as special advisor to the chancellor. Also in May 2024, Nakasone was elected to the board of trustees of Saint John's University, his alma mater. Nakasone was awarded an honorary Doctor of Laws degree from Dartmouth College on 9 June 2024. Nakasone joined the board of OpenAI in June 2024. In June 2025, Nakasone spoke at the WORLD.MINDS meeting in Washington, DC, about China, AI and the transatlantic relationship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Weak_artificial_intelligence#cite_note-6] | [TOKENS: 594] |
Weak artificial intelligence Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind; when focused on one narrow task, it is also called narrow AI or artificial narrow intelligence (ANI). Weak AI is contrasted with strong AI, which can be interpreted in various ways. Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is, conversely, the opposite. Applications and risks Some examples of narrow AI are AlphaGo, self-driving cars, robot systems used in the medical field, and diagnostic doctors. Narrow AI systems can be dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for such an AI to grasp complex patterns and arrive at a solution that works reliably in various environments. This "brittleness" can cause it to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences: they could, for example, cause disruptions in the electric grid, damage nuclear power plants, cause global economic problems, and misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses can ultimately have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into society, oftentimes unnoticed by the public. Autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. Narrow AI has also been the subject of some controversy, having been implicated in unfair prison sentences, discrimination against women in hiring, and deaths caused by autonomous driving, among other cases. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends; a minimal illustrative sketch of such a single-task system follows at the end of this section. For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in propaganda or other potentially malicious activities. Weak AI versus strong AI John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", used to assess whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong". Scholars such as Antonio Lieto have argued that the current research on both AI and cognitive modelling are perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" vs "narrow" AI distinction) and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, implied by the strong AI assumption).
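To make the recommender-system point above concrete, the sketch below shows one kind of single-task ("narrow") system: item-based collaborative filtering over a toy rating matrix. This is an illustration only; the data, function names, and method are invented for the example and are not the algorithm of TikTok or any other platform mentioned above.

```python
# Minimal sketch of a narrow-AI recommender (an illustrative assumption,
# not a real platform's algorithm). It performs exactly one task: ranking
# the items a user has not yet rated, using item-item cosine similarity.
import numpy as np

# Toy rating matrix: rows = users, columns = items; 0 means "not rated".
ratings = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [0.0, 1.0, 5.0, 4.0],
    [1.0, 0.0, 4.0, 5.0],
])

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item-rating columns (0 if degenerate)."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / norm) if norm else 0.0

def recommend(user: int, top_n: int = 1) -> list[int]:
    """Rank the items `user` has not rated by similarity-weighted ratings."""
    n_items = ratings.shape[1]
    # Item-item similarities computed from the rating columns.
    sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                     for j in range(n_items)] for i in range(n_items)])
    rated = np.nonzero(ratings[user])[0]  # items this user has already rated
    scores = {}
    for item in range(n_items):
        if ratings[user, item] == 0:      # score only unrated items
            weights = sim[item, rated]
            scores[item] = float(weights @ ratings[user, rated]
                                 / (weights.sum() or 1.0))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(user=0))  # the best-scoring item user 0 has not rated yet
```

The narrowness is the point: this model cannot transcribe speech or drive a car; it only maps past ratings to new suggestions, which is why failures of such systems stay confined to, though they can still matter within, their single domain.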
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pornography] | [TOKENS: 20303] |
Pornography Pornography (colloquially called porn or porno) is sexually suggestive material, such as a picture, video, text, or audio, intended for sexual arousal.[a] Made for consumption by adults, pornographic depictions have evolved from cave paintings, some forty millennia ago, to modern-day virtual reality presentations. Adults-only sexual content is generally classified as either pornography or erotica. The oldest artifacts considered pornographic were discovered in Germany in 2008 and are dated to be at least 35,000 years old.[b] Human fascination with sexual imagery has been a constant throughout history. However, the reception of such imagery has varied according to historical, cultural, and national contexts. The Indian Sanskrit text Kama Sutra (3rd century CE) contained prose, poetry, and illustrations regarding sexual behavior, and the book was celebrated, while the British English text Fanny Hill (1748), considered "the first original English prose pornography", has been one of the most prosecuted and banned books. In the late 19th century, a film by Thomas Edison that depicted a kiss was denounced as obscene in the United States, whereas Eugène Pirou's 1896 film Bedtime for the Bride was received very favorably in France. From the mid-twentieth century on, societal attitudes towards sexuality became more lenient in the Western world, where legal definitions of obscenity were narrowed. In 1969, Blue Movie by Andy Warhol became the first film depicting unsimulated sex to receive a wide theatrical release in the United States. This was followed by the "Golden Age of Porn" (1969–1984). The introduction of home video and the World Wide Web in the late 20th century led to global growth in the pornography business. Beginning in the 21st century, greater access to the Internet and affordable smartphones made pornography more mainstream. Pornography has been credited with providing a safe outlet for sexual desires that may not be satisfied within relationships, and with facilitating sexual fulfillment in people who do not have a partner. Pornography consumption has been found to induce psychological moods and emotions similar to those evoked during sexual intercourse and casual sex. Pornography usage is considered a widespread recreational activity in line with other digitally mediated activities such as use of social media or video games.[c] People who regard porn as sex education material have been identified as more likely not to use condoms in their own sex life, thereby assuming a higher risk of contracting sexually transmitted infections (STIs); performers working for pornographic studios undergo regular testing for STIs, unlike much of the general public. Comparative studies indicate that higher tolerance and consumption of pornography among adults tends to be associated with greater support for gender equality. Among feminist groups, some seek to abolish pornography, believing it to be harmful, while others oppose censorship efforts, insisting it is benign. A longitudinal study ascertained that pornography use is not a predictive factor in intimate partner violence.[d] Porn Studies, started in 2014, is the first international peer-reviewed academic journal dedicated to the critical study of pornographic "products and services". Pornography is a major influence on people's perception of sex in the digital age; a few pornographic websites rank among the top 50 most visited websites worldwide. 
Called an "erotic engine", pornography has been noted for its key role in the development of various communication and media processing technologies. For being an early adopter of innovations and a provider of financial capital, the pornography industry has been cited to be a contributing factor in the adoption and popularization of media related technologies. The exact economic size of the porn industry in the early twenty-first century is unknown. In 2023, estimates of the total market value stood at over US$172 billion. The legality of pornography varies across countries. People hold diverse views on the availability of pornography. From the mid-2010s, unscrupulous pornography such as deepfake pornography and revenge porn have become issues of concern. Etymology and definition The word pornography is a conglomerate of two ancient Greek words: πόρνος (pórnos) "fornicators", and γράφειν (gráphein) "writing, recording, or description". In Greek language, the term pornography connotes depiction of sexual activity; no date is known for the first use of the term pornography, the earliest attested, most related word found is πορνογράφος (pornographos) i.e. "someone writing about harlots" in the 3rd century CE work Deipnosophists by Athenaeus. The oldest published reference to the word pornography as in 'new pornographie,' is dated back to 1638 and is credited to Nathaniel Butter in a history of the Fleet newspaper industry. The modern word pornography entered the English language as the more familiar word in 1842 via French "pornographie," from Greek "pornographos". The term porn is an abbreviation of pornography. The related term πόρνη (pórnē) "prostitute" in Greek, originally meant "bought, purchased" similar to pernanai "to sell", from the proto-Indo-European root per-, "to hand over" — alluding to act of selling. The word pornography was originally used by classical scholars as "a bookish, and therefore inoffensive term for writing about prostitutes", but its meaning was quickly expanded to include all forms of "objectionable or obscene material in art and literature". In 1864, Webster's Dictionary published "a licentious painting" as the meaning for pornography, and the Oxford English Dictionary: "obscene painting" (1842), "description of obscene matters, obscene publication" (1977 or earlier). Definitions for the term "pornography" are varied, with people from both pro- and anti-pornography groups defining it either favorably or unfavorably, thus making any definition very stipulative. Nevertheless, academic researchers have defined pornography as sexual subject material such as a picture, video, text, or audio that is primarily intended to assist sexual arousal in the consumer, and is created and commercialized with "the consent of all persons involved".[a] Arousal is considered the primary objective, the raison d'etre a material must fulfill for it to be treated as pornographic. As some people can feel aroused by an image that is not meant for sexual arousal and conversely cannot feel aroused by material that is clearly intended for arousal, the material that can be considered as pornography becomes subjective. Pornography throughout history Pornography is viewed by historians as a complex cultural formation. Depictions of a sexual nature existed since prehistoric times as seen in Venus figurines and rock art. 
People across various civilizations created works that depicted explicit sex; these include artifacts, music, poetry, and murals, among other things, often intertwined with religious and supernatural themes. The oldest artifacts, including the Venus of Hohle Fels, which is considered to be borderline pornographic, were discovered in 2008 at a cave near Stuttgart in Germany; radiocarbon dating suggests they are at least 35,000 years old, from the Aurignacian period.[b] A vast number of artifacts discovered in the ancient Mesopotamian region bear explicit depictions of heterosexual sex. Glyptic art from the Sumerian Early Dynastic Period frequently showed scenes of frontal sex in the missionary position. In Mesopotamian votive plaques from the early second millennium (c. 2000 – c. 1500 BCE), a man is usually shown penetrating a woman from behind while she bends over, drinking beer through a straw. Middle Assyrian lead votive figurines often portrayed a man standing and penetrating a woman as she rests on an altar. Scholars have traditionally interpreted all these depictions as scenes of hieros gamos (an ancient sacred marriage between a god and a goddess), but they are more likely to be associated with Inanna, the Mesopotamian goddess of sex and sacred prostitution. Many sexually explicit images, including models of male and female sexual organs, were found in the temple of Inanna at Assur. Depictions of sexual intercourse were not part of the general repertory of ancient Egyptian formal art, but rudimentary sketches of heterosexual intercourse have been found on pottery fragments and in graffiti. The final two thirds of the Turin Erotic Papyrus (Papyrus 55001), an Egyptian papyrus scroll discovered at Deir el-Medina, consists of a series of twelve vignettes showing men and women in various sexual positions. The scroll was probably painted in the Ramesside period (1292–1075 BCE), and its high artistic quality indicates that it was produced for a wealthy audience. No other similar scrolls have yet been discovered. The archaeologist Nikolaos Stampolidis has noted that the society of ancient Greece held lenient attitudes towards sexual representation in the fields of art and literature. The Greek poet Sappho's Ode to Aphrodite (600 BCE) is considered an early example of lesbian poetry. Red-figure pottery, invented in Greece around 530 BCE, often portrayed erotic images. The fifth-century BCE comic playwright Aristophanes described the male genitalia in 106 ways and the female genitalia in 91. Lysistrata (411 BCE) is a sex-war comedy play performed in ancient Greece. In India, Hinduism embraced an inquisitive attitude towards sex as an art and a spiritual ideal. Some ancient Hindu temples incorporated various aspects of sexuality into their art work. The temples at Khajuraho and Konark are particularly renowned for their sculptures, which had detailed representations of human sexual activity. These depictions were viewed with a spiritual outlook, as sexual arousal was believed to indicate the embodiment of the divine.[e] "Pornography is sometimes characterised as the symptom of a degenerate society, but anyone even noddingly familiar with Greek vases or statues on ancient Hindu temples will know that so-called unnatural sex acts, orgies and all manner of complex liaisons have for millennia past been represented in art for the pleasure and inspiration of the viewer everywhere. 
The desire to ponder images of love-making is clearly innate in the human – perhaps particularly the male – psyche." — Tom Hodgkinson Kama, the word used to connote sexual desire, was explored in Indian literary works such as the Kama Sutra, which dealt with the practical as well as the psychological aspects of human courtship and sexual intercourse. The Sanskrit text Kama Sutra was compiled by the sage Vatsyayana into its final form sometime during the second half of the third century CE. This text, which included prose, poetry, and illustrations regarding erotic love and sexual behavior, is one of the most celebrated Indian erotic works. The Koka Shastra is another medieval Indian work that explored kama. When large-scale archaeological excavations were undertaken in the ancient Roman city of Pompeii during the 18th century, much of the erotic art of Pompeii and Herculaneum came to light, shocking the authorities, who endeavored to hide it away from the general public. In 1821, the moveable objects were locked away in the Secret Museum in Naples, and what could not be removed was either covered or cordoned off from public view. Other examples of early art and literature of a sexual nature include: Ars Amatoria (Art of Love), a treatise on the art of seduction and sensuality by the Roman poet Ovid; the artifacts of the Moche people in Peru (100 CE to 800 CE); The Decameron, a collection of short stories by the 14th-century Italian author Giovanni Boccaccio, some of which are sexual in nature; and the fifteenth-century Arabic sex manual The Perfumed Garden. A highly developed culture of visual erotica flourished in Japan during the early modern era. From at least the 17th century, erotic artworks became part of the mainstream social culture. Depictions of sexual intercourse were often presented in pictures meant to provide sex education for medical professionals, courtesans, and married couples. Makura-e (pillow pictures) were made for entertainment as well as for the guidance of married couples. The ninth-century Japanese art form "Shunga", which depicted sexual acts in woodblock prints and paintings, became so popular by the 18th century that the Japanese government began to issue official edicts against it. Even so, Japanese erotica flourished, with the works of artists such as Suzuki Harunobu achieving worldwide fame. Japanese censorship laws enacted in 1870 made the production of erotic works difficult. The laws remained in effect until the end of the Pacific War in 1945; nevertheless, pornography flourished through the sale of "erotic, grotesque, nonsense" (ero-guro-nansensu) periodicals, particularly in the Taishō era (1912–1926). From the 1960s, pink films, which portrayed sexual themes, became popular in Japan. In 1981, the first Japanese adult video (AV) was released. The Japanese pornography industry peaked in the early 2000s, when about 30,000 AVs were made a year. From the mid-2010s, the increased availability of free porn on the Internet led to a decline in the production of AVs. Other forms of adult entertainment, such as hentai, which refers to pornographic manga and anime, and erotic video games, have become popular in recent decades. In Europe, the 16th-century Italian Renaissance work I Modi (The Ways), also known as The Sixteen Pleasures, became famous for its engravings that explicitly depicted sex positions. The publication of this book was considered the beginning of print pornography in Rome. 
The second edition of this book was published in 1527, titled Aretino Postures, which combined erotic images with text, a first in Western culture. The Vatican called for the complete destruction of all copies of the book and the imprisonment of its author Marcantonio Raimondi. With the development of the printing press in Europe, the publication of written and visual material that was essentially pornographic began. Heptaméron, written in French by Marguerite de Navarre and published posthumously in 1558, is one of the earliest examples of salacious texts from this era. Beginning with the Age of Enlightenment and advances in printing technology, the production of erotic material became popular enough that an underground marketplace for such works developed in England, with a separate publishing and bookselling business. Historians have identified the 18th century as an age of pornographic opulence. Written by anonymous authors, the titles The Progress of Nature (1744); The History of the Human Heart: or, the Adventures of a Young Gentleman (1749), which contained descriptions of female ejaculation; and The Child of Nature (1774) have been noted as prominent pornographic fictional works from this period. The book Fanny Hill (1748) is considered "the first original English prose pornography, and the first pornography to use the form of the novel." An erotic literary work by John Cleland, Fanny Hill was first published in England as Memoirs of a Woman of Pleasure. The novel has been one of the most prosecuted and banned books in history. The author John Cleland was charged with "corrupting the King's subjects." At around the same time, erotic graphic art that began to be extensively produced in Paris came to be known in the Anglosphere as "French postcards". Enlightenment-era France has been noted by historians as the center of origin for modern-era pornography. The works of French pornography, which often concentrated on the education of an ingénue into a libertine, dominated the sale of sexually explicit content. The French sought to interlace narratives of sexual pleasure with a philosophical and anti-establishment basis. Political pornography began with the French Revolution (1789–99). Apart from the sexual component, pornography became a popular medium for protest against the social and political norms of the time. Pornography during this period was used to explore ideas of sexual freedom for women and men and the various methods of contraception, and to expose the offenses of powerful royals and elites. The working and lower classes in France produced pornographic material en masse, with themes of impotence, incest, and orgies that ridiculed the authority of the Church-State, aristocrats, priests, monks, and other royalty. One of the most important authors of socially radical pornography was the French aristocrat Marquis de Sade (1740–1814), whose name helped derive the words "sadism" and "sadist". He advocated libertine sexuality and published writings critical of authorities, many of which contained pornographic content. His work Justine (1791) interlaced orgiastic scenes with extensive debates on the ills of property and traditional hierarchy in society. During the Victorian era (1837–1901), the invention of the rotary printing press made the publication of books easier; many works of a lascivious nature were published during this period, often under pen names or anonymously. 
In 1837, Holywell Street (known as "Booksellers' Row") in London had more than 50 shops that sold pornographic material. Many of the works published in the Victorian era are considered bold and graphic even by today's lenient standards. The English novel The Adventures, Intrigues, and Amours, of a Lady's Maid!, written by an anonymous "Herself" (c. 1838), professed the notion that homosexual acts are more pleasurable for women than heterosexuality, which it linked to painful and uncomfortable experiences. Some of the popular publications from this era include The Pearl (a magazine of erotic tales and poems published from 1879 to 1881); Gamiani, or Two Nights of Excess (1870) by Alfred de Musset; and Venus in Furs (1870) by Leopold von Sacher-Masoch, from whose name the term "masochism" was derived. The Sins of the Cities of the Plain (1881) is one of the first solely male homosexual literary works published in English; it is said to have inspired another gay literary work, Teleny, or The Reverse of the Medal (1893), whose authorship has often been attributed to Oscar Wilde. The Romance of Lust, written anonymously and published in four volumes during 1873–1876, contained graphic descriptions of themes detailing incest, homosexuality, and orgies. Other publications from the Victorian era that included fetish and taboo themes such as sadomasochism and 'cross-generational sex' are My Secret Life (1888–1894) and Forbidden Fruit (1898). On accusations of obscenity, many of these works remained outlawed until the 1960s. The world's first law criminalizing pornography was the UK Obscene Publications Act 1857, enacted at the urging of the Society for the Suppression of Vice. The act, passed by the British Parliament in 1857, applied to the United Kingdom and Ireland. It made the sale of obscene material a statutory offense and gave the authorities the power to seize and destroy any material they considered obscene. For centuries before, sexually explicit material had been considered a domain exclusive to the aristocratic classes. When pornographic material flourished in Victorian-era England, the affluent classes believed they were sensible enough to deal with it, unlike the lower working classes, who they thought would be distracted by such material and cease to be productive. Beliefs that masturbation would make people ill, insane, or blind also flourished. The obscenity act gave government officials the power to interfere in the private lives of people unlike any other law before. Some of the people suspected of masturbation were forced to wear chastity devices. "Cures" and "treatments" for masturbation involved measures like giving electric shocks and applying carbolic acid to the clitoris. The law was criticized for being established on still unproven claims that sexual material is noxious to people or to public health. In 1865, the US postal service was seen as a "vehicle" for the transmission of materials that were deemed obscene by American lawmakers. An act relating to the postal services was passed, which imposed a fine of $500 for knowingly mailing any "obscene book, pamphlet, picture print, or other publication". From 1865 until the first three months of 1872, a total of nine people were held on various charges of obscenity, with one person sentenced to prison for a year, while in the next ten months fifteen people were arrested under this law. 
This was partly due to the efforts of Anthony Comstock, who became a major figure in 1872 and held great power to control people's sexual activities, including the choice of abortion. The Comstock Act of 1873 is the American equivalent of the British Obscene Publications Act. The anti-obscenity bill, drafted by Anthony Comstock, was debated for less than an hour in the US Congress before being passed into law. Apart from the power to seize and destroy any material alleged to be obscene, the law made it possible for the authorities to make arrests over any perceived act of obscenity, which included the possession of contraceptives by married couples. Reportedly 15 tons of books and 4 million pictures were destroyed; there were some 4,000 arrests, and about 15 people were driven to suicide. At least 55 people whom Comstock identified as abortionists were indicted under the Comstock Act. The laws regarding pornography have differed across historical, cultural, and national contexts. The English Act did not apply to Scotland, where the common law continued to apply. Before the English Act, publication of obscene material was treated as a common-law misdemeanor, which made effectively prosecuting authors and publishers difficult even in cases where the material was clearly intended as pornography. However, neither the English nor the United States act defined what constituted "obscene", leaving this for the courts to determine. To implement the Comstock Act, the US courts used the British Hicklin test to define obscenity; that definition was first proposed in 1868, a decade after the passing of the English obscenity act. The definition became cemented in 1896 and continued in use until the mid-twentieth century. From 1957 to 1997, the US Supreme Court made numerous judgments that redefined obscenity. The nineteenth-century legislation eventually outlawed the publication, retail and trafficking of certain writings and images that were deemed pornographic. Although the laws ordered the destruction of shop and warehouse stock meant for sale, the private possession and viewing of (some forms of) pornography was not made an offense until the twentieth century. Historians have explored the role of pornography in determining social norms. The Victorian attitude that pornography was only for a select few is seen in the wording of the Hicklin test, stemming from a court case in 1868, where it asked: "whether the tendency of the matter charged as obscenity is to deprave and corrupt those whose minds are open to such immoral influences". Although officially prohibited, the sale of sexual material nevertheless continued through "under the counter" means. Magazines specialising in a genre called "saucy and spicy" became popular during this time (1896 to 1955); the titles of a few popular magazines include Wink: A Whirl of Girls, Flirt: A FRESH Magazine, and Snappy. Cover stories in these magazines featured segments such as "perky pin-ups" and "high-heel cuties". Some of the popular erotic literary works from the twentieth century include the novels Story of the Eye (1928), Tropic of Cancer (1934), Tropic of Capricorn (1938), and the French Histoire d'O (Story of O) (1954), and the short-story collections Delta of Venus (1977) and Little Birds (1979). After the invention of photography, the birth of erotic photography followed. 
The oldest surviving pornographic photograph dates back to about 1846 and has been described as depicting "a rather solemn man gingerly inserting his penis into the vagina of an equally solemn and middle-aged woman". At one time, it was more expensive to purchase an erotic photograph than to hire a prostitute. The Parisian demimonde included Napoleon III's minister, Charles de Morny, an early patron who delighted in acquiring and displaying erotic photos at large gatherings. Pornographic film production commenced almost immediately after the invention of the motion picture in 1895. A pioneer of the motion picture camera, Thomas Edison, released various films, including The Kiss, that were denounced as obscene in late 19th-century America. Two of the earliest pioneers of pornographic film were Eugène Pirou and Albert Kirchner. Kirchner directed the earliest surviving pornographic film for Pirou under the trade name "Léar". The 1896 film Le Coucher de la Mariée showed Louise Willy performing a striptease. Pirou's film inspired a genre of risqué French films that showed women disrobing, and other filmmakers realized profits could be made from such films. Sexually explicit films left producers and distributors liable to prosecution. Such films were produced illicitly by amateurs, starting in the 1920s, primarily in France and the United States. Processing the film was risky, as was its distribution, which was strictly private. In the Western world, during the 1960s, social attitudes towards sex and pornography slowly changed. In 1967, Denmark repealed its obscenity laws on literature; this led to a decline in the sale of pornographic and erotic literature. Hoping for a similar effect, in the summer of 1969, legislators in Denmark abolished censorship of picture pornography, thereby effectively becoming, from July 1, 1969, the first country to legalize pornography, including child pornography, which was later prohibited in 1980. The 1969 legislation, instead of resulting in a decline in pornography production, led to an explosion of investment in, and commercial production of, pornography in Denmark, which made the country's name synonymous with sex and pornography. The total retail turnover of pornography in Denmark for the year 1969 was estimated at $50 million. Much of the pornographic material produced in Denmark was smuggled into other countries around the world. In the United States, pornography is protected by the First Amendment to the United States Constitution unless it constitutes obscenity or child pornography that is produced with real children. Nevertheless, in Stanley v. Georgia (1969), the U.S. Supreme Court upheld the right of an adult to possess obscene material in private. Subsequently, however, the Supreme Court rejected the claim that under Stanley there is a constitutional right to provide obscene material for private use or to acquire it for private use. The right to possess obscene material does not imply the right to provide or acquire it, because the right to possess it "reflects no more than ... the law's 'solicitude to protect the privacies of the life within [the home]'". In 1969, Blue Movie by Andy Warhol became the first feature film depicting explicit sexual intercourse to receive a wide public theatrical release in the United States. "Blue Movie was real. But it wasn't done as pornography—it was done as an exercise, an experiment. But I really do think movies should arouse you, should get you excited about people, should be prurient. 
Prurience is part of the machine. It keeps you happy. It keeps you running." Film scholar Linda Williams remarked that prurience "is a key term in any discussion of moving-image sex since the sixties. Often it is the 'interest' to which no one wants to own up". In 1968, the Motion Picture Association of America created a new film ratings system in which any film not approved by the association was released with an "X" rating. When pornographers began to release their productions with the X rating, the association adopted the NC-17 rating for adults-only films, leaving the X rating to pornography. Later, the invented gimmick rating "XXX" became a standard for pornographic material. In 1970, the United States President's Commission on Obscenity and Pornography, set up to study the effects of pornography, reported that there was "no evidence to date that exposure to explicit sexual materials plays a significant role in the causation of delinquent or criminal behavior among youths or adults". The report further recommended against placing any restriction on adults' access to pornography and suggested that legislation "should not seek to interfere with the right of adults who wish to do so to read, obtain, or view explicit sexual materials". Regarding the notion that sexually explicit content is improper, the Commission found it "inappropriate to adjust the level of adult communication to that considered suitable for children". The Supreme Court supported this view. In 1971, Sweden removed its obscenity clause. Further relaxation of legislation during the early 1970s in the US, West Germany and other countries led to a rise in pornography production. The 1970s have been described by Linda Williams as 'the "Classical" Era of Theatrically Exhibited Porn', a period now called the Golden Age of Porn. In 1979, the British Committee on Obscenity and Film Censorship, better known as the Williams Committee, formed to review the laws concerning obscenity, reported that pornography could not be shown to be harmful and that to think otherwise was to see pornography "out of proportion". The committee declared that the existing variety of laws in the field should be scrapped and that, so long as it was kept from children, adults should be free to consume pornography as they saw fit. The Meese Report in 1986 argued against loosening restrictions on pornography in the US; the report was criticized as biased, inaccurate, and not credible. In 1988, the Supreme Court of California ruled in People v. Freeman that "filming sexual activity for sale" does not amount to procuring or prostitution and is protected under the First Amendment. This ruling effectively legalized the production of X-rated adult content in Los Angeles County, which by 2005 had emerged as the largest center in the world for the production of pornographic films. Pornographic films appeared throughout the twentieth century: first as stag films (1900–1940s), then as porn loops or short films for peep shows (1960s), then as feature films for theatrical release in adult movie theaters (1970s), and finally as home videos (1980s). Pornographic magazines published during the mid-twentieth century have been noted for playing an important role in the sexual revolution and the liberalization of laws and attitudes towards sexual representation in the Western world. In 1953, Hugh Hefner published the first US issue of Playboy, a magazine he described as a "handbook for the urban male".
The magazine contained images of nude women along with articles and interviews covering politics and culture. Twelve years later, in 1965, Bob Guccione started his publication Penthouse in the UK, and published its first American issue in 1969 as a direct competitor to Playboy. In its early days, the images of naked women published in Playboy did not show any pubic hair or genitals. Penthouse became the first magazine to show pubic hair, in 1970. Playboy followed its lead, and there ensued a competition between the two magazines over the publication of ever more racy pictures, a contest that came to be labeled the "Pubic Wars". "We were the first to show full frontal nudity. The first to expose the clitoris completely. I think we made a very serious contribution to the liberalization of laws and attitudes. HBO would not have gone as far as it does if it was not for us breaking the barriers. Much that has happened now in the Western world with respect to sexual advances is directly due to steps that we took." — Bob Guccione, Penthouse founder, in 2004. The tussle between Playboy and Penthouse paled into obscurity when Larry Flynt started Hustler, which in 1974 became the first magazine to publish labial "pink shots". Hustler projected itself as the magazine for the working classes, as opposed to the urban-centered Playboy and Penthouse. Around the same time, in 1972, Helen Gurley Brown, editor of Cosmopolitan magazine, published a centerfold featuring actor Burt Reynolds in the nude. His popular pose was later emulated by many other famous people. The success of Cosmo led to the launch of Playgirl in 1973. At their peak, Playboy sold close to six million copies a month in the US, while Penthouse sold nearly five million. In the 2010s, as the market for printed pornographic magazines declined, with Playboy selling about a million copies and Penthouse about a hundred thousand, many magazines became online publications. As of 2005, the best-selling US adult magazines maintained greater reach than most other non-pornographic magazines, and often ranked among top sellers. Modern-day pornography began to take shape from the mid-1980s, when the first desktop computers and public computer networks were released. Since the 1990s, the Internet has made pornography more accessible and culturally visible. Before the 1990s, Usenet newsgroups served as the base for what has been called the "amateur revolution", in which non-professionals of the late 1980s and early 1990s, with the help of digital cameras and the Internet, created and distributed their own pornographic content independent of mainstream networks. Use of the World Wide Web became popular with the introduction of Netscape Navigator in 1994. This development led to newer methods of pornography distribution and consumption. The Internet turned out to be a popular source for pornography and was called the "Triple-A Engine" for offering consumers "anonymity, affordability, and accessibility", while driving the business of pornography. The notion of the Internet as a medium abounding with porn became popular enough that in 1995 Time published a cover story titled "CYBERPORN" with the face of a shocked child as the cover photo. In the Reno v. ACLU (1997) ruling, the US Supreme Court upheld the legality of pornography distribution and consumption by adults over the Internet. The Court noted that government may not reduce communication between adults to "only what is fit for children".
With the introduction of broadband connections, much of the distribution network of pornography moved online, giving consumers anonymous access to a wide range of pornographic material. To have better control over their content on the Internet, some professional pornographers maintain their own websites. Danni's Hard Drive, started in 1995 by Danni Ashe, a former stripper and nude model who coded the site herself, is considered one of the earliest online pornographic websites; CNN reported that it had generated revenues of $6.5 million by 2000. According to some leading pornography providers on the Internet, about one in a thousand visitors to a website would take out a subscription, at monthly fees averaging around $20; on those figures, a site drawing a million visitors a month could expect on the order of a thousand subscribers and roughly $20,000 in monthly subscription revenue. Ashe said in an interview that her website employed 45 people and that she expected to earn $8 million in 2001 alone. The total number of pornographic websites in 2000 was estimated at more than 60,000. The development of streaming sites, peer-to-peer (P2P) file-sharing networks, and tube sites led to a subsequent decline in the sale of DVDs and adult magazines. Starting in the 21st century, greater access to the Internet and affordable smartphones made pornography more accessible and culturally mainstream. The total number of pornographic websites in 2012 was estimated to be around 25 million, comprising 12% of all websites. About 75 percent of households in the US gained Internet access by 2012. Data from 2015 suggests an increase in pornography consumption over the past few decades, which is attributed to the growth of Internet pornography. Technological advancements such as digital cameras, laptops, smartphones, and Wi-Fi have democratized the production and consumption of pornography. Subscription-based service providers such as OnlyFans, founded in 2016, are becoming popular as platforms for the pornography trade in the digital era. Apart from professional pornographers, content creators on such platforms include others such as a physics teacher, a race car driver, and a woman undergoing cancer treatment. In 2022, the total pornographic content accessible online was estimated at over 10,000 terabytes. AVN and XBIZ are industry-specific organizations based in the US that provide information about the adult entertainment business. The XBIZ Awards and AVN Awards, analogous to the Golden Globes and Oscars, are the two prominent award shows of the adult entertainment industry. The Free Speech Coalition (FSC) is a trade association, and the Adult Performer Advocacy Committee (APAC) a labor union, for the US adult entertainment industry. The scholarly study of pornography, notably in cultural studies, is limited. Porn Studies, which began in 2014, is the first international peer-reviewed academic journal exclusively dedicated to the critical study of the "products and services" identified as constituting pornography. Classifications Adult content is generally classified as either pornography or erotica. The distinction between pornography and erotica is mostly subjective. Pornographic content is categorized as softcore or hardcore. Softcore pornography contains depictions of nudity without explicit depiction of sexual activity, while hardcore pornography includes explicit depiction of sexual activity. Hardcore porn is more heavily regulated than softcore porn. Softcore porn was popular between the 1970s and 1990s. Pornography productions cater to consumers of various sexual orientations.
Nonetheless, pornography featuring heterosexual acts made for heterosexual consumers comprises the bulk of what is called "mainstream porn", marking the industry as more or less "heteronormative". Mainstream pornography involves professional performers who work for various corporate film studios in their respective productions. Mainstream pornography productions are usually classified as feature or gonzo. Features involve storylines, characterizations, scripted dialog, elegant costumes, detailed sets, and soundtracks, which make the productions look similar to mainstream Hollywood productions but with depictions of explicit sexual activity included. Features contain both original narratives and parodies of mainstream feature films, TV shows, celebrities, video games or literary works. Gonzo is a form of content creation that attempts to put the viewer into the scene, commonly achieved by close-up camera work or performers talking to the audience; also called "wall-to-wall", gonzo involves some aspects of "breaking the fourth wall" between the audience and performers. The term "gonzo" is often misused as a genre label for demeaning depictions; however, gonzo is a film-making style, not a genre, and gonzo style is variably incorporated in the creation of all types or genres of adult content. Gonzos do not involve the expensive sets or costly production values of features, which makes their production relatively inexpensive. Since the mid-2010s, about 95 percent of porn productions have been gonzo. Pornography productions that are independent of mainstream pornographic studios are classified as indie (independent) pornography. These productions cater to more specific audiences, and often feature different scenarios and sexual activity compared to mainstream porn. The performers in indie porn include real-life couples and regular people, who sometimes work in partnership with other performers. Apart from creating content, the performers themselves do background work such as videography, editing, and web development, and distribute under their own brand. Paysites like Clips4Sale.com, MakeLoveNotPorn.tv, and PinkLabel.tv provide a platform for the web-based content of independent pornographers. Websites such as OnlyFans have caused significant present-day growth in the independent pornography industry, with The Economist claiming that OnlyFans has "transformed porn." In 2024, OnlyFans saw $7.2 billion in payments between the 377.5 million users and 4.6 million creators registered on the platform. Pornography encompasses a wide variety of genres providing for an enormous range of consumer tastes. Most of the genres or types are named according to the depiction of sexual activity; these include anal, creampie, cum shot, double penetration, fisting, and threesome. Categorizations based on the age of the performers include teens, milf, and mature. Other categorizations based on gender and sexual identity include lesbian, gay, bisexual, transsexual, queer, and shemale, while those based on race include ethnic and interracial. Others include Mormon and zombie. Pornography also features numerous fetishes, such as "'fat' porn, amateur porn, disabled porn, porn produced by women, queer porn, BDSM and body modification."[f] Commercialism Pornography is commercialized mainly through the sale of pornographic films. Many adult films had theatrical releases during the 1970s, corresponding with the Golden Age of Porn.
A 1970 federal study estimated that the total retail value of hardcore pornography in the United States was no more than $5 million to $10 million. The release of the VCR by Sony Corporation for the mass market in 1975 marked a shift from watching porn in adult movie theaters to watching it in the privacy of the home. The introduction of VHS brought down production quality through the 1980s. Starting in the 1990s, the Internet eased access to pornography. The pay-per-view model enabled people to buy adult content directly from cable and satellite TV service providers. According to a Showtime network report, in 1999 adult pay-per-view services made $367 million, six times the $54 million earned in 1993. Although this development resulted in a decline in rentals, the revenues generated over the Internet provided substantial financial gains for pornography producers and credit card companies, among others. By the mid-1990s, the adult film industry had agents for performers, production teams, distributors, advertisers, industry magazines, and trade associations. The introduction of home video and the World Wide Web in the late twentieth century led to global growth in the pornography business, and performers got multi-film contracts. In 1998, Forrester Research estimated the online "adult content" industry's annual revenue at $750 million to $1 billion. Retail stores or sex shops engaged in the sale of adult entertainment material, ranging from videos and magazines to sex toys and other products, contributed significantly to the overall commercialization of pornography. Sex shops sell their products both on online shopping platforms such as Amazon and on specialized websites. In 2000, the total annual revenue from sales and rentals of pornographic material in the US was estimated to be over $4 billion. The hotel industry, through the sale of adult movies to customers as part of room service over pay-per-view channels, generated an annual income of about $180–190 million. Some of the major companies and hotel chains involved in the sale of adult films over pay-per-view platforms include AT&T, Time Warner, DirecTV (then owned by General Motors), EchoStar, Liberty Media, Marriott International, Westin and Hilton Worldwide. The companies said their services were a response to a growing American market that wanted pornography delivered at home. Studies in 2001 put the total US annual revenue (including video, pay-per-view, Internet and magazines) between $2.6 billion and $3.9 billion. From the mid-2000s, the emergence of tube sites led to an increase in free streaming and a decrease in traditional studio sales. Many performers turned to subscription-based platforms like OnlyFans, which continue to provide financial independence for some, but also increase market saturation, making it harder for new creators to establish themselves. Additionally, dependence on third-party platforms leaves creators vulnerable to policy changes and financial restrictions. In 2020, Visa and Mastercard implemented restrictions on processing payments for adult content due to concerns over illegal material. The arrival of AI-generated adult content, including deepfake pornography, poses further ethical and legal dilemmas. The production and distribution of pornography are economic activities of some importance.
In Europe, Budapest is regarded as the industry center. Other pornography production centers around the world are located in Florida (US), Brazil, the Czech Republic, and Japan. In the United States, the pornography industry employs about 20,000 people, including 2,000 to 3,000 performers, and is centered in the San Fernando Valley of Los Angeles, which by 2005 had become the largest pornography production center in the world. Apart from regular media coverage, the industry in the US receives considerable attention from private organizations, government agencies, and political organizations. As of 2011, pornography was becoming one of the biggest businesses in the United States. In 2014, the porn industry was believed to bring in at least $13 billion a year in the United States. Through the 2010s, many pornography production companies and top pornographic websites such as Pornhub, RedTube and YouPorn were acquired by MindGeek, a company that has been described as "a monopoly" in the pornography business. This consolidation has been identified as a problem: according to Marina Adshade, a professor at the Vancouver School of Economics and the author of Dollars and Sex: How Economics Influences Sex and Love, having a monopoly in the pornography business has forced producers to reduce their charges and radically changed the work of performers, "who are now under greater pressure to perform acts that they would have been able to refuse in the past", all at a lower price without profits for themselves. Some pornographic productions have been linked to prostitution. Online pornography is available both for a fee and free of charge. The availability of free porn on the Internet has led to a decline in the business of mainstream pornography. Piracy is estimated to cost the porn industry some $2 billion a year. The budgets of many studios shrank considerably, and contracts for performers became less common. Reportedly, applications by established pornography companies for porn-shoot permits in Los Angeles County fell by 95 percent between 2012 and 2015. According to Mark Spiegler, an adult talent agent, female performers made about $100,000 a year in the early 2000s; by 2017, the figure was about $50,000. The technological era led to the decline of the studio and "the rise of the pornography worker herself". Newer ways of monetization have opened up for pornography workers, many of whom are taking the path of entrepreneurship. In 1995, Jenna Jameson signed her first contract with the porn studio Wicked Pictures. After building a brand image for herself, she started her own company, ClubJenna, which by 2005 was reportedly earning annual revenue of $30–35 million. "Performers are hustlers now," said Chanel Preston (a performer who was also chairperson of the Adult Performer Advocacy Committee), noting that performers have to be creative to sustain their income and reach their audience, both of which, she said, are mainly achieved through "feature dancing, selling merchandise, webcamming", among other activities. "Custom" pornography, made to the requests of individual clients, has emerged as one new business niche. The average career of a performer in this new era lasts about four to six months. Before moving on to the business side, adult performers use studio work to advertise and build a brand image for themselves, acquiring an audience who will later pay for personal websites or webcam performances.
Commercial webcamming, which emerged in the 1990s as a niche sector of the adult entertainment industry, grew to become a multibillion-dollar business by the mid-2020s. The exact economic size of the porn industry in the early twenty-first century is unknown. Kassia Wosick, a sociologist at New Mexico State University, estimated the global porn market value at $97 billion in 2015, with US revenue estimated at between $10 billion and $12 billion. IBISWorld, a leading researcher of various markets and industries, calculated total US revenue as reaching $3.3 billion by 2020. On the basis of a research report by a market analysis firm, USA Today published that the estimated worth of the adult entertainment industry market in 2023 was over $172 billion. Pornographers have taken advantage of each major technological advancement in the production and distribution of their services. Pornography has been called an "erotic engine" and a driving force in the development of various media-related technologies, from the printing press, through photography (still and motion), to satellite TV, home video, and streaming media. One of the world's leading anti-pornography campaigners, Gail Dines, has stated that "the demand for porn has driven the development of core cross-platform technologies for data compression, search, transmission and micro-payments." Many of the technological developments led by pornography have benefited other fields of human activity too. In the early 2000s, Wicked Pictures pushed for the adoption of the MPEG-4 file format ahead of others; it later became the most commonly used format across high-speed Internet connections. In 2009, Pink Visual became one of the first companies to license and produce content with software introduced by a small Toronto-based company called Spatial View, which later made it possible to view 3D content on iPhones. As an early adopter of innovations, the pornography industry has been cited as a crucial factor in the development and popularization of various media processing and communication technologies. From innovative smaller film cameras, to VCRs, to the Internet, the porn industry has employed newer technologies far earlier than other commercial industries; this early adoption provided developers with their early financial capital, which aided the further development of these technologies. The success of innovative technologies has been said to be predicted by their uptake in the porn industry: "The way you know if your technology is good and solid is if it's doing well in the porn world." Pornographic content accounted for most videotape sales during the late 1970s. The pornography industry has been considered an influential factor in deciding media format wars, including the VHS vs. Betamax contest (the videotape format war) and the Blu-ray vs. HD DVD contest (the high-definition format war). Piracy, the illegal copying and distribution of material, is of great concern to the porn industry, which has been the subject of many litigations and formalized anti-piracy efforts. Many of the innovative data rendering procedures, enhanced payment systems, customer service models, and security methods developed by pornography companies have been co-opted by other mainstream businesses. Pornography companies served as the basis for a large number of innovations in web development.
Much of the IT work in porn companies is done by people referred to as "porn webmasters". Often well paid in what are small businesses, they have more freedom to test innovations than IT employees in larger organizations, which tend to be risk-averse. Some pornography is produced without human actors at all. The idea of computer-generated pornography was conceived very early as one of the obvious areas of application for computer graphics. Until the late 1990s, digitally manipulated pornography could not be produced cost-effectively. In the early 2000s, it became a growing segment as modeling and animation software matured and the rendering capabilities of computers improved. Further advances in technology allowed increasingly photorealistic 3D figures to be used in interactive pornography. The first pornographic film to be shot in 3D was 3D Sex and Zen: Extreme Ecstasy, released on 14 April 2011 in Hong Kong. The various media for pornographic depiction have evolved throughout history, from prehistoric cave paintings, about forty millennia ago, to futuristic virtual reality renditions. Experts in the pornography business predict that more people in the future will consume porn through virtual reality headsets, which are expected to give consumers more personal experiences than they can have in the real world. Speculation is rife about an increased presence of sex robots in future pornography productions. Consumption Pornography is a product made by adults for consumption by adults, and its consumption has become more common with the expansive use of the Internet. About 90% of pornography is consumed on the Internet, with consumers preferring content that is in tune with their sexuality. Pornography has been found to be a significant influence on people's ideas about sex in the digital age. Pornographic websites rank among the top 50 most visited websites worldwide; XVideos and Pornhub are the two most visited pornographic websites worldwide. Pornography consumption has been found to induce "psychological moods and emotions" similar to those evoked during actual sexual intercourse and casual sex. Researchers have identified four broad motivating factors for pornography consumption: an innate sexual drive or desire; a wish to learn about sex and improve one's own sexual performance; peer pressure or social groups; and lack of a sexual relationship or absence of a partner. The majority of pornography consumers tend to be male and unmarried, with higher levels of education. Younger people are more frequent consumers of porn than older people, and there has been a gradual increase in consumption rates across different age groups with the increased availability of free porn over the Internet. Researchers at McGill University ascertained that, on viewing pornographic content, men reached their maximum arousal in about 11 minutes and women in about 12 minutes; an average visit to a pornographic website lasts 11.6 minutes. Both marriage and divorce are found to be associated with lower subscription rates for adult entertainment websites. Subscriptions are more widespread in regions that have higher measures of social capital. Pornographic websites are most often visited during office hours: according to a CNBC report, seventy percent of online-porn access in the US happens during nine-to-five working hours.
Sexual arousal and sexual enhancement tend to be the primary motivations among the self-reported reasons users give for their pornography consumption. Studies have found that greater levels of psychological distress lead to higher rates of pornography consumption; pornography may provide temporary relief from stress or anxiety. A need for coping and relief from boredom is also found to result in higher consumption of pornography. A study of Austrian adults found that men consume pornography more frequently than women. The intent of consumption may vary: men are more likely to use pornography as a stimulant for sexual arousal during solitary sexual activity, while women are more likely to use pornography as a source of information or entertainment, and rather prefer using it together with a partner to enhance sexual stimulation during partnered sexual activity. Studies have found that sexual functioning, defined as "a person's ability to respond sexually or to experience sexual pleasure", is greater in women who consume pornography frequently than in women who do not; no such association was noticed in men. Women who consume pornography are more likely to know their own sexual interests and desires, and in turn to be willing and able to communicate them during partnered sexual activity; in women, the ability to communicate their sexual preferences has been reported to be associated with greater sexual satisfaction for themselves. Pornographic material is found to expand the sexual repertoire of women by helping them learn new rewarding sexual behaviors, such as clitoral stimulation, and to enhance their overall "sexual flexibility". Women who consume pornography frequently are more easily aroused during partnered sex and are more likely to engage in oral sex than women who do not view pornography. Almost 50% of women users of pornography reported having engaged in cunnilingus, which research suggests is related to female orgasm, and women users reported experiencing orgasms more frequently than women who do not use pornography (87% vs. 64%). Most people probably do not consider pornography use by a partner to be infidelity. A 2024 Economic and Social Research Institute (ESRI) study found that pornography consumption among 20-year-olds was highly gendered, with 64% of young men reporting that they watched pornography compared to 13% of young women, indicating that young men were nearly five times more likely to consume pornography than young women. Researchers have attributed the higher rates of pornography consumption among young men to a combination of factors, including higher average sexual curiosity and libido, gendered social norms that are more permissive of male sexual behaviour, greater stigma surrounding female pornography use, and differences in patterns of internet and media consumption. Annually since 2013, Pornhub Insights has released a "Year in Review" report (except in 2020, due to the COVID-19 pandemic). The data show that the lesbian category has been consistently the most popular among female viewers since 2014, when gender statistics were first gathered, and that women in general, regardless of sexual orientation, are more likely than men to search for lesbian-associated terms such as "scissoring". Several articles, including ones by Cosmopolitan, Glamour, and Women's Health, have supported these findings through research of their own.
Furthermore, gay male pornography ranked as the second most preferred category for female visitors before its statistics were separated into their own section starting in 2016. According to data scientist Seth Stephens-Davidowitz, 25% of female searches for heterosexual content on the site involved keywords for painful, humiliating, or non-consensual sex. Research has also shown that a significant portion of viewers of gay male pornography are heterosexual men, with a 2018 study by YouPorn revealing that around a quarter of straight men report watching gay porn at least occasionally. A two-year survey (2018–2020) assessing the role of pornography in the lives of highly educated medical university students in Germany, with a median age of 24, found that pornography served as an inspiration for many students in their sex lives. Pornography use among the students was higher in males than in females; among the male students, those who did not cheat on their partner or contract an STI were found to be more frequent consumers of pornography. Although pornography use was more common among men, associations between pornography use and sexuality were more apparent in women. Among the female students, those who reported being satisfied with their physical appearance consumed three times as much pornography as those who reported being dissatisfied with their bodies; a feeling of physical inadequacy was found to be a restraining factor in the consumption of pornography. Female students who consumed pornography more often reported having had multiple sexual partners. Both female and male students who had enjoyed the experience of anal intercourse were reported to be frequent consumers of pornography. Sexual content depicting bondage, domination, or violence was consumed by only a minority of 10%. More sexual openness and less sexual anxiety were observed in students who regularly consumed pornography. No association was noticed between regular pornography use and sexual dissatisfaction in either female or male students. This finding concurred with a finding from a longitudinal study, which demonstrated that most pornography consumers differentiate pornographic sex from real partnered sex and do not experience diminishing satisfaction with their sex lives. A vast majority of men and a considerable number of women in the US use porn.[g] A 2008 study of university students aged 18 to 26 across six college sites in the United States found that 67% of young men and 49% of young women approved of pornography viewing, with nearly 9 out of 10 men (87%) and 31% of women reportedly using pornography. The Huffington Post reported in 2013 that porn websites registered more visitors than Netflix, Amazon, and Twitter combined. A 2014 poll, which asked Americans when they had "last intentionally looked at pornography", found that 46% of men and 16% of women in the 18–39 age group had done so in the past week. A 2016 study reported that about 70% of men and 34% of women in romantic relationships use pornography annually. Gallup surveys conducted from 2011 to 2018 noted a gradual increase in the acceptance of pornography among the general American public. Since the late 1960s, attitudes towards pornography have become more positive in the Nordic countries; in Sweden and Finland the consumption of pornography has increased over the years.
A 2006 study of Norwegian adults found that over 80% of respondents had used pornography at some point in their lives, with a difference of 20% observed between men and women in their respective use. A 2015 study in Finland noted that 75% of women and over 90% of men aged 30–40 found porn "very exciting". Those who had watched porn during the past year included 71% of women aged 18–24, almost 60% of women aged 18–49, and a tenth of women over 65; among men, the figures were above 90% for men under 50, three-quarters for those aged 18–64, and a majority of those over 65. The numbers were increasing quickly, particularly for women, partly due to increased masturbation. In 2012 and 2013, interviews with a large number of Australians revealed that 63% of men and 20% of women had viewed pornography in the past year. A 2020 Egyptian study surveying 15,027 individuals in Arab countries noted a prevalence of pornography use "nearly similar to Danish, German, and American ones". In 2021, it was estimated that in modern countries, 46–74% of men and 16–41% of women are regular users of pornography. In 2022, a national survey in Japan of men and women aged 20 to 69 revealed that 76% of men and 29% of women had used pornography as part of their sexual activity. A 2023 study reported that in the Netherlands, the share of young men who had watched porn in the previous six months ranged from 65% (ages 13–15) to 96% (ages 22–24), and among young women from 22% (ages 13–15) to 75% (ages 22–24). Legality and regulations The legal status of pornography varies widely from country to country. Regulating hardcore pornography is more common than regulating softcore pornography. Child pornography is illegal in almost all countries, and some countries have restrictions on rape pornography and zoophilic pornography. Pornography in the United States is legal provided it does not depict minors and is not obscene. Community standards, as set out in the Supreme Court's 1973 Miller v. California decision, determine what constitutes "obscene". The US courts do not have jurisdiction over content produced in other countries, but anyone distributing it in the US is liable to prosecution under the same community standards. As the courts consider community standards foremost in deciding any obscenity charge, the changing nature of community standards over time and place makes prosecutions rare. In the United States, a person receiving unwanted commercial mail that he or she deems pornographic (or otherwise offensive) may obtain a Prohibitory Order. Many online sites merely require users to state that they are of a certain age, with no other age verification. A total of 16 states and the Republican Party have passed resolutions declaring pornography a "public health" threat. These resolutions are symbolic, imposing no restrictions, and are made to sway public opinion on pornography; the notion of pornography as a threat to public health is not supported by any international health organization. Adult film industry regulations in California require that all performers in pornographic films use condoms. However, the use of condoms in pornography is rare, and because porn does better financially when actors do not use condoms, many companies film in other states. Twitter is a popular social media platform among porn industry performers because, unlike Instagram and Facebook, it does not censor such content.
Canada, like the US, criminalizes the "production, distribution, or possession" of materials deemed obscene. Obscenity, in the Canadian context, is defined as "the undue exploitation of sex" provided it is connected to images of "crime, horror, cruelty, or violence". What is considered "undue" is decided by the courts, which assess community standards in deciding whether exposure to the given material may result in harm, with harm defined as "predisposing people to act in an anti-social manner". Pornography law in the United Kingdom has no concept of community standards. Following the highly publicized murder of Jane Longhurst, the UK government in 2009 criminalized the possession of what it terms "extreme pornography". The courts decide whether any given material is legally extreme; penalties on conviction include fines or incarceration for up to three years. Banned content includes representations considered "grossly offensive, disgusting, or otherwise of an obscene character". While there are no restrictions on the depiction of male ejaculation, any depiction of female ejaculation in pornography is completely banned in the UK, as well as in Australia. In most of Southeast Asia, the Middle East, and China, the production, distribution or possession of pornography is illegal. In Russia and Ukraine, webcam modeling is allowed provided it contains no explicit performances; in other parts of the world, commercial webcamming is banned as a form of pornography. Disseminating pornography to a minor is generally illegal. There are various measures to restrict minors' access to pornography, including protocols for pornographic stores. Pornography can infringe on the basic human rights of those involved, especially when sexual consent was not obtained. Revenge porn is a phenomenon in which disgruntled sexual partners release images or video footage of intimate sexual activity of their partners, usually on the Internet, without the authorization or consent of the individuals involved. In many countries there has been a demand to make such activities specifically illegal, carrying higher punishments than mere breach of privacy, image rights, or circulation of prurient material; as a result, some jurisdictions have enacted specific laws against "revenge porn". In the US, a July 2014 criminal case decision in Massachusetts, Commonwealth v. Rex, 469 Mass. 36 (2014), made a legal determination as to what was not to be considered "pornography", in this particular case "child pornography". It was determined that photographs of naked children from sources such as National Geographic magazine, a sociology textbook, and a nudist catalog were not pornography in Massachusetts, even while in the possession of a convicted and (at the time) incarcerated sex offender. Drawing the line depends on time, place and context. Mainstream Western culture has been increasingly "pornified" (i.e., influenced by pornographic themes, with mainstream films often including unsimulated sexual acts). Since the very definition of pornography is subjective, material considered erotic or even religious in one society may be denounced as pornography in another: when European travellers visited India in the 19th century, they were dismayed at the religious representation of sexuality on Hindu temples and deemed it pornographic.
Similarly, many films and television programs that are unobjectionable in contemporary Western societies are labeled "pornography" in Muslim societies. In the United States, some courts have applied US copyright protection to pornographic materials. Some courts have held that copyright protection effectively applies to works whether they are obscene or not, but not all courts have ruled the same way. Copyright protection for pornography in the United States was challenged again as recently as February 2012. STIs prevention and safer sex practices Performers working for pornographic film studios undergo regular testing for sexually transmitted infections (STIs) every two weeks. They have to test negative for HIV, trichomoniasis, chlamydia, gonorrhea, syphilis, and hepatitis B and C before showing up on a set, and are then inspected for sores on their mouths, hands, and genitals before commencing work. The industry believes this method of testing to be a viable practice for safer sex, as its medical consultants claim that since 2004 about 350,000 pornographic scenes have been filmed without condoms and HIV has not been transmitted even once through performance on set. However, some studies suggest that adult film performers have high rates of chlamydia or gonorrhea infection, and many of these cases may be missed by industry screening because these bacteria can colonize many sites on the body. In the initial years, studios assessed performers' suitability on the results of their blood and urine tests. According to a 2019 study by the American College of Emergency Physicians, swab tests offer better insight than urine samples for detecting bacterial STIs like chlamydia and gonorrhea. Performers such as Cherie DeVille have emphasized swab tests for safer sex. According to performer Angela White, studios will not allow performers to work unless they are completely clean and insist on regular testing: "So for me, because I work so much, I'm testing every 12 days – and that is a full sweep of STIs such as chlamydia, gonorrhoea, syphilis, HIV and trichomoniasis. We're doing throat swabs, vaginal swabs and anal swabs." Allan Ronald, a Canadian doctor and HIV/AIDS specialist who did groundbreaking studies on the transmission of STIs among prostitutes in Africa, said there is no doubt about the efficacy of the testing method, but that he felt a little uncomfortable: "because it's giving the wrong message — that you can have multiple sex partners without condoms — but I can't say it doesn't work." Relatedly, it has been found that individuals who received little sex education or who perceive pornography as a source of information about sex are less apt to use condoms in their own sex lives, making themselves more susceptible to contracting STIs. In 2020, the US National Sex Education Standards released recommendations to incorporate "porn literacy" for students in grades 6 to 12 as part of sex education in the US. Veteran performer Nina Hartley, who has a degree in nursing, stated that the amount of time involved in shooting a scene can be very long, and that with condoms in place it becomes a painful proposition: their usage is uncomfortable despite the use of lube, causes friction burn, and opens up lesions in the genital mucosa. Advocating the testing method for performers, Hartley said, "Testing works for us, and condoms work for outsiders." "We're tested every fourteen days. That is literally twenty-three more times than the average American.
If that person makes it to their yearly physical. I have met tons of people that haven't been to the doctor in years. That scares me because they have no idea what their status is.... I don't hook up with people outside of the porn industry because I'm terrified. And I'm not the only one. There's many performers that know: if you go out into the wild, you will come back with something." — Ash Hollywood, porn actress. Emphasizing that performers in the industry take necessary precautions like PrEP and are at lower risk of contracting HIV than most sexually active persons outside the industry, many prominent female performers have vehemently opposed regulatory measures like Measure B that sought to make the use of condoms mandatory in pornographic films. Professional female performers have called the daily use of condoms at work an occupational hazard, saying that condoms cause micro-tears, friction burn, swelling, and yeast infections, which altogether make them more susceptible to contracting STIs.[h] Views on pornography Pornography has been said to provide a safe outlet for sexual desires that may not be satisfied within relationships, and to facilitate sexual fulfillment in people who cannot or do not want to have real-life partners. People view pornography for various reasons: to enrich their sexual arousal, facilitate orgasm, or aid masturbation; to learn about sexual techniques; to reduce stress or alleviate boredom; to enjoy themselves; to see people like themselves represented; to explore their sexual orientation; to improve their romantic relationships; or simply because their partner wants them to. Pornography is noted for engrossing people "on more than masturbatory levels". Aesthetic philosophers debate whether pornographic representations can be considered expressions of art. Pornography has been compared to journalism, as both offer a view into the unknown or hidden aspects of life. French philosopher Michel Foucault remarked that "it is in pornography that we find information about the hidden, the forbidden and the taboo". Scholars such as Linda Williams, Jennifer Nash, and Tim Dean believe pornography "is a form of thinking", comprising ideas that are far more reflective about sexuality and gender than what the creators or consumers of pornography intend. People have referred to pornography as a means of exploring their sexuality, and have reported porn being helpful in learning about human sexuality in general. Studies recommend that clinical practitioners use pornography as an instructional aid to show clients new and alternative sexual behaviors as part of psychosexual therapy. British psychologist Oliver James, known for his work on happiness, stated that "a high proportion of men use porn as a distraction or to reduce stress ... It serves an anti-depressant purpose for the unhappy." British-American novelist Salman Rushdie opined that pornography's presence in society is "a kind of standard-bearer for freedom, even civilisation". In the evaluation of medical professionals, pornography can be neither good nor bad, as it does not endorse or advocate a single set of values regarding sex; as such, individuals may introspect on their own values with regard to sex while evaluating pornography. The relationship between pornography and its audience is found to be complex.
While many users reported positive effects from their use, others, especially women, were found to be troubled by body-image issues, attributed to the unrealistic image of "beauty" that pornography portrays. The increasing prevalence of purported beauty-enhancing procedures such as breast augmentation and labiaplasty among the general populace has been attributed to the popularity of pornography. Data from pornographic websites on people's viewing habits is studied by academics to analyze sexual preferences and mating choices. Men more often look for women who have larger chests and hips, with a smaller waist–hip ratio; women are found to prefer men who are taller, stronger, appear highly masculine, and are in roles that can provide resources while being protective (CEOs, doctors, athletes, lawmen). Studies on the harmful effects of pornography include attempts to find any potential influence of pornography on rape, domestic violence, sexual dysfunction, difficulties with sexual relationships, and child sexual abuse. A longitudinal study ascertained that pornography use is not a perpetrating factor in intimate partner violence.[d] A 2020 study that analyzed depictions in video pornography found that normative sexual behaviors (e.g., vaginal intercourse, fellatio) were the most commonly depicted, while depictions of extreme acts of violence and rape were very rare. There is no clear evidence that pornography is a cause of rape. Several studies conclude that the liberalization of porn in society may be associated with decreased rates of rape and sexual violence, while others suggest no effect, or are inconclusive. No correlation has been found between pornography use and the practice of sexual consent or lack thereof. Mental health experts are divided over whether pornography use is a problem for people. While some literature reviews suggest pornography use can be addictive, insufficient evidence exists to draw conclusions. According to clinical psychologist and certified sex therapist David Ley, calling pornography an "addiction" has been "an area of substantial, protracted controversy and debate". Ley explained that pornography does not affect an adult brain or body the way alcohol or drugs do: "An alcoholic going cold turkey can have seizures and die because their brain has become physiologically dependent on the alcohol, but no one has ever had seizures or died from not getting to watch porn when they want to." Scholars note that pornography use has no implications for public health, as it does not meet the definition of a public health crisis. Neuroscientists have noted that young minds are still in developmental stages and that exposure to emotionally charged material such as pornography is likely to have an impact on them that it does not have on adults, and have suggested caution in enabling potential access to such material. Opposition to pornography use has been studied in relation to sexual satisfaction, gender violence, and marital quality (wives who watched pornography more frequently scored much better than the rest). Some issues of doxing and revenge porn have been linked to a few pornography websites. Since the mid-2010s, deepfake pornography has become an issue of concern. Feminist movements of the late 1970s and 1980s dealt with the issues of pornography and sexuality in debates referred to as the "sex wars".
While some feminist groups seek to abolish pornography, believing it to be harmful, other feminist groups oppose censorship efforts, insisting pornography is benign. A large-scale study of data from the General Social Survey (2010–2018) refuted the argument that pornography is inherently anti-woman or anti-feminist and that it drives sexism. The study did not find a relationship between "pornography viewing" or "pornography tolerance" and higher sexism, a position that had been held by some feminists; it instead found higher pornography consumption and pornography tolerance among men to be associated with greater support for gender equality. The study concluded that "pornography is more likely to be about the sex rather than the sexism". People who supported regulated pornography expressed less sexist attitudes than people who sought to abolish pornography; notably, non-feminists were found more likely to support a ban on pornography than feminists. Many feminists, both male and female, have reflected that the effects of pornography on society are neutral. Adult users of pornography were found to be more egalitarian than nonusers: they are more likely to hold favorable attitudes towards women in positions of power and in workplaces outside the home. A 2016 study authored by Black feminists criticized the American adult entertainment industry for the alleged omission and exclusion of Black women in pornographic representations, particularly in the interracial genres. As pornography becomes a kind of manual on how bodies in pleasure can look, and is "one of the few places where we see our bodies, and other people's bodies", it becomes imperative for pornography to represent a "variety of forms", stated the feminist scholars. Anti-pornography feminists argue that the aesthetics of pornography demean Black women with undertones of racism. Gender studies scholars Mireille Miller-Young and Jennifer Christine Nash, in their writings on the intersectionality of race and pornography, noted that Black people have been depicted as hypersexual, and Black women as more objectified. The scholars also noted major discrepancies in performers' pay rates: White women have historically made 75 percent more per scene, and sometimes still make 50 percent more, than Black women. Feminist resentment of pornography tends to focus on two concerns: that pornography depicts violence and aggression, and that pornography objectifies women. Multiple analyses of pornographic videos found that women have been overwhelmingly on the receiving end of aggression from male performers, with the women's reactions to aggression depicted as either positive or neutral, which is at odds with a report that found only 14.2% of US adult women find pain during sex appealing. Two studies in the 1990s found that Black women were the targets of aggression and faced more violence from both Black and White men than did White women. However, more recent research from 2018 found that Black women were the least likely group of women to suffer nonconsensual aggression and were more likely to receive affection from their male partners; Black men engaged in fewer intimate behaviors than White men, and White women were found more likely to experience violence during sexual activity with White men than with Black men.
Concerning Asian women, a 2016 study based on a sample of 3,053 videos from Xvideos.com found that the 170 videos in the Asian women category contained much less aggression and less objectification, but also gave the women less agency. However, another study of a sample of 172 videos from Pornhub found that the more than 25 videos in the Asian/Japanese category had considerably more aggression than those of other categories. A 2002 study of "internet rape sites" found that, of the 56 clear pictures identified, 34 depicted Asian women, and nearly half the sites had either an image of or a text reference to an Asian woman. Findings on depictions of Asian women in pornography are thus inconsistent in the scientific literature. The prevalence of aggression in pornography appears to be changing: a 2018 study of popular videos on Pornhub found that segments of aggression towards women are now fewer and have declined gradually over the past decade, with viewers preferring content in which women genuinely experience pleasure. Prominent anti-pornography feminists such as Andrea Dworkin and Catharine MacKinnon argue that all pornography is demeaning to women, or that it contributes to violence against women, both in its production and in its consumption. The production of pornography, they argue, entails the physical, psychological, or economic coercion of the women who perform in it. They charge that pornography eroticizes the domination, humiliation, and coercion of women, while reinforcing sexual and cultural attitudes that are complicit in rape and sexual harassment. Other sex-work-exclusionary feminists have insisted that pornography presents a severely distorted image of sexual consent and reinforces sexual myths, such as that women are readily available, desire to engage in sex at any time with any man on men's terms, and always respond positively to men's advances. In contrast to these objections, other feminist scholars "ranging from Betty Friedan and Kate Millett to Karen DeCrow, Wendy Kaminer and Jamaica Kincaid" have supported the right to consume pornography. The anti-porn feminist stranglehold began to loosen when sex-positive feminists like Susie Bright and performers Nina Hartley and Candida Royalle affirmed the rights of women to consume and produce porn. The works of Camille Paglia argued that Westerners have long been "pagan celebrants" and that pornography has been an inseparable part of Western culture. Wendy McElroy has noted that feminism and pornography are mutually related, both thriving in environments of tolerance and both repressed whenever regulations are placed on sexual expression. Societies where pornography and sexual expression are prohibited are more likely to be places where women are often subjected to violence and sexual abuse. Women's rights are far stronger in societies with liberal attitudes to sex – think of conservative countries such as Afghanistan, Yemen or China, and the place of women there. And yet, anti-porn campaigners neglect such issues entirely. A recent study by the US Department of Justice compared the four states with the highest broadband access, finding a 27 per cent decrease in rape and attempted rape, while the four with the lowest access had a 53 per cent increase over the same period.
The lesbian feminist movement of the 1980s is considered a seminal moment for women in the porn industry, as more women entered the production side, allowing them to gear porn more towards women, since they knew what women wanted both from the perspective of the actresses and of the female audience. This involved making lesbian pornography that is not merely geared towards heterosexual males, a change considered good, as for a long time the porn industry had been directed by men for men. Furthermore, the advent of the VCR, home video, and affordable video cameras allowed for the possibility of feminist pornography. Feminist porn directors are interested in challenging representations of men and women, as well as in providing sexually empowering imagery that features many kinds of bodies. Angela White started her own production company, AWG Entertainment, in which she has complete creative control over the content, from her partners to the location, costumes, and the "vibe" of the video. "I am a feminist, so what I create is feminist, and I produce ethical porn, which is when everything is consensual", she said. Women are more likely to consume porn that is "female-centered" and features acts such as cunnilingus. A study of pornographic videos found that when men spend more time performing cunnilingus they have higher volumes of ejaculate; an increase in sexual arousal resulting from exposure to the vaginal secretions called "copulins" during cunnilingus is reasoned to be the cause. Female-centric porn is mostly made by women, and in these works sexual activity is initiated by the female. Porn for women is identified by factors like greater attention to "sensual surroundings" and "soft focus camerawork" rather than explicit depiction of sexual activity, making the productions warmer and more humane compared to traditional porn made for heterosexual men. "If feminists define pornography, per se, as the enemy, the result will be to make a lot of women ashamed of their sexual feelings and afraid to be honest about them. And the last thing women need is more sexual shame, guilt, and hypocrisy—this time served up by feminism" — Ellen Willis. The porn industry has been noted as one of the few industries where women enjoy a power advantage in the workplace. "Actresses have the power," Alec Metro, one of the men in line, ruefully noticed of the X-rated industry. A former firefighter who claimed to have lost a bid for a job to affirmative action, Metro was already divining that porn might not be the ideal career choice for escaping the forces of what he called "reverse discrimination". Female performers can often dictate which male actors they will and will not work with. Porn – at least, porn produced for a heterosexual audience – is one of the few contemporary occupations where the pay gap operates in favor of women: the average actress makes fifty to a hundred per cent more money than her male counterpart. Psychologists consider pornography to be of particular relevance in the study of intimate relationships and the development of adolescent sexuality. Mainstream psychology is mostly concerned with studying the effects of pornography, while critical psychology and applied psychology are engaged in more nuanced academic study of pornography. Problematic pornography use is assessed in clinical psychology. A 2013 study refuted the notion that porn actresses have higher rates of psychological problems than other women.
The study compared 177 porn actresses with other women of similar age, ethnicity, and marital status, and found that the porn actresses had "higher levels of self-esteem, positive feelings, social support, sexual satisfaction, and spirituality" compared to the comparison group. In analytical psychology, humans' sexual and religious-spiritual instincts are considered tightly associated with each other, both sharing a common objective, which, as Carl Jung acknowledged, is the striving of the psyche for "wholeness". The psyche of a person is understood to be differentiated, being made up of traits that are feminine and masculine in nature. According to Jung, this differentiation allowed the formation of opposite polarities that make "consciousness possible". According to psychologist and author Giorgio Tricarico, as an individual moves through various life experiences, their psyche approaches wholeness, or the "non-differentiated" state, a realm of higher nondual consciousness considered to belong to the sacred or divine. In the Hindu tantric view, the guiding image of a male and a female conjoined in sexual intercourse represents the embodiment of nondual consciousness. Men and women, however they appear, are considered microcosmic compounds of the macrocosmic principles – Shiva (spirit) and Shakti (matter). Shiva and Shakti, together in "perpetual union", form the nondual "Absolute". Sigmund Freud called the feminine Shakti "libido that cannot be simply repressed." Self-realization, or becoming aware of the "'deep' femininity", entails dealing with this powerful sexual energy. Tantric rituals like maithuna harness the sexual energy in order to bring the male and the female principles, which appear opposite, into unity or a harmonious whole in the "divine feminine" or "unified divine consciousness", an idea analogous to analytical psychology's coincidentia oppositorum. In classical Hindu thought, the nature of the self in males and females is assessed as androgynous, with sexuality being a creative function of the divine to align the human self with its bipolar nature. The masculine and the feminine principles of the self were identified with Shiva and Shakti, who make up the two sexual polarities. By establishing a connection between the two for the flow of erotic energy in one's own being (as in an electric circuit between positive and negative terminals for the flow of electric current), by means of sexual stimulation, through "erotic visualization" or "ritual copulation", the self would "divest" from its body identity and realign into the "bipolar being", which then represents a unit microcosm mirroring the nondual macrocosm; thus an individual, in being one with the absolute, experiences bliss, considered the power of the goddess (Shakti) in a tangible form.[i] In the Hindu tantric view, the women who participate in union rituals, thereby enabling men to attain self-realization, are regarded as shakti or the goddess, as they are believed to embody her.[j] Recognition of the deity in an objective woman is centered upon a man's acceptance of the subjective feminine and the primacy of her desires. Tricarico argued that modern-day pornography is in essence a "desacralised, technological, and consumerist" equivalent of ancient sacred prostitution – a custom that involved honoring the sacred feminine and worshipping prostitutes as the goddess.
The feminine is believed to embody particular qualities of the sacred or divine more broadly and deeply than the masculine; consequently, women's ability to incorporate nondualistic awareness is assumed to be higher. Tricarico argued that women in porn, through their performances of many sexual acts, would inadvertently approach the non-differentiated state, an effect he called the "intimation of hierophany". "Porn actresses may embody the medium to enter what used to be the realm of the sacred", he said. The actresses have been likened to the "descendants of the lost goddesses" who are now offering the gift of the "numinous" to all through their performances, but are unacknowledged or devalued for their contributions. The use of epithets like "bitch", "whore", and "slut" for sexually active women has been attributed to the denial of the subjective feminine by men. The subdued acceptance of female sexuality as a value in its own right is manifested when a man's admiration for the "bitch" is withdrawn if she happens to be his wife or girlfriend. Along with showing "admiration, lust, gratitude, and desire", men show brazen hate and disgust towards women; this behavioral dichotomy has been ascribed to the "patriarchal hypocrisy" embedded in men. The unconscious perception by men of women's greater ability to reach the undifferentiated state of the psyche is reasoned to be a cause for their intentional humiliation, wilful devaluation, and deliberate belittlement of women. According to the psychoanalytic scholar Julia Kristeva, the psychological rejection and fear of the mother figure in males is the root cause of their behaviors that seek to subjugate women. Men in their infancy live in a state of "undifferentiated physical and psychic fusion" with the mother, experiencing "emotional exhilaration and jouissance". However, as they mature and sense their separateness from the mother, they seek to become independent subjects and take recourse to paternal images and patriarchal behaviors in the hope of eliminating any possible further "undifferentiated/psychotic fusion" with the mother, as they feel threatened by it. According to psychoanalyst Melanie Klein, the rejection and fear of the maternal image in females leads them to reject their own femininity. Tricarico hoped that porn would become a place where men discard patriarchal antics, women embrace the sacred aspects, and audiences embrace porn as a joyful experience for the body – a genuine interaction that helps them approach the non-differentiated state: in being with an other, we differentiate ourselves and experience jouissance while becoming a unit being. Many religions have long and vehemently opposed a wide range of sexual behaviors; as a result, religious people are found highly susceptible to great distress over their use of pornography. Religious people who use pornography tend to feel sexually ashamed. Sexual shame, which arises from a person's perception of their self in other people's minds and from a negative assessment of their own sexuality, is considered a powerful factor that over time governs an individual's behavior. As sexuality is interwoven into one's personal identity, sexual shame or sexual embarrassment is found to attack the person's very sense of self. When a sexual shaming event occurs, the person attributes causation to themselves, resulting in self-condemnation, and experiences feelings of sadness, loneliness, anger, unworthiness, and rejection, along with a perceived judgment of their self by others.
In this mental landscape, a fear arises that one's sexual self needs to be hidden. This psychological process initiates and fuels further shame and lowers one's self-esteem. Sexual shame constricts the "psychic space for free play with one's sexuality". Sexual shame begets more shame, leading to a cycle of powerlessness culminating in deepening negative emotions. Those who tend to feel shame easily are found to be at greater risk for depression and anxiety disorders. According to clinical psychologist Gershen Kaufman, all sexual disorders are principally "disorders of shame". The attribution of shame to sexuality is traced back to the biblical interpretation of nakedness as shameful.[k] Much of Christian mythology presented sexuality as an obstacle to be surmounted on the way to salvation. The major Abrahamic religions condemn all forms of nonmarital and nonreproductive sexual pleasure as unacceptable. In Hinduism, bhoga (sexual pleasure) is celebrated as a value in itself and is considered one of the two ways to nirvana, the other being the more demanding yoga.[l] In the Hindu tantric view, watching coitus as an act of Shiva and Shakti is believed to unfurl the Kundalini, and is considered equivalent to engaging in maithuna, the fifth M of the panchamakara.[m] A central concept in Hinduism, purushartha, advocates pursuit of the four main goals for happiness: dharma (virtue), artha (riches), kama (pleasure), and moksha (freedom). The pursuit of Kama was elaborated by the sage Vatsyayana in his treatise Kama Sutra, which states that sexual pleasure and food are essential for the well-being of the body, and that on both of them depend virtue and prosperity. Food, despite sometimes causing indigestion, is still consumed regularly, and so it must be with pleasure, which should be pursued with caution while eliminating unwanted or harmful effects. Just as no one abstains from cooking food for fear of beggars who might ask for it, or refrains from sowing wheat for fear of animals that might destroy the crop, so, Vatsyayana instructs, men and women should acquire knowledge of Kama by the time they reach youth and pursue it even though dangers exist; those who become accomplished in Dharma, Artha, and Kama attain the highest happiness in this world and hereafter. According to the Buddha, happiness is of two types, one derived from "domestic life" and the other from "monastic life", and of the two, the monastic kind is "superior". As a result of the Buddha's effective advocacy of monasticism, marriage and divorce in Buddhist communities remained civil matters and never acquired sacramental significance. Counsel over the sex life of householders was minimal, while for the monks it was extensive, as in the Vinaya, since all sexual behaviors were meant to be suppressed for the sake of enlightenment. The early Buddhist texts castigated women as detrimental beings. The Buddha himself said often that a woman's body is "a vessel of impurity, full of stinking filth. It is like a rotten pit ... like a toilet, with nine holes pouring all sorts of filth." Once, when it came to his notice that a monk, Suddina, had transgressed celibacy with his wife for the sake of progeny, the Buddha chided him, saying, "It were better for you, foolish man, that your male organ should enter the mouth of a terrible and poisonous snake, than that it should enter a woman." According to the Buddha, all sexual desires are incompatible with enlightenment.
In Buddhism, even people who derive pleasure from watching others engage in sexual activity were relegated to the category of pandaka (pusillanimous). The Buddha said sexuality is a fetter that must be avoided completely, and that men who engage with it are "impure" and will not be freed from "old age". After the Buddha's death, subsequent generations of Buddhists resolved their problematic attitudes towards sex by accommodating different views. According to Indonesia's foremost Islamic preacher, Abdullah Gymnastiar, shame is a noble emotion commanded in the Quran and held high by Muhammad, who has been quoted as saying, "Faith is compiled of seventy branches... and shame is one of them." To cultivate shame in Muslims, their sexual gaze needs to be checked, as an unchecked gaze is believed to be the door through which Satan enters and soils the heart. In 2006, when anti-pornography protests erupted in Indonesia, the world's most populous Muslim-majority country, over the publication of the inaugural Indonesian edition of Playboy, Abdullah called for legislation to ban pornography and embarked on a mission to shroud the state with a sense of shame, coining the slogan "the more shameful, the more faithful". During these protests, Indonesia's foremost Islamic newspaper, Republika, published daily front-page editorials featuring a logo of the word pornografi crossed out with a red X. The Jakarta office of Playboy Indonesia was ransacked by members of the Islamic Defenders Front (Front Pembela Islam, or FPI), and bookstore owners were threatened not to sell any issue of the magazine. Consequently, in December 2008, Indonesian lawmakers signed an anti-pornography bill into law with overwhelming political support. Highly religious people are more likely to support policies against pornography, such as censorship. Ironically, regions with highly religious and conservative populations were found to search for more pornography online. Religious people are prone to obsessive thoughts regarding sin and punishment by God over their pornography use, causing them to feel ashamed and to perceive themselves as having a pornography addiction while also suffering from OCD-related symptoms. A study of sexually active religious people found that those who were highly spiritually mature had less shame, while those who were not spiritually mature had high shame.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Geodesics_in_general_relativity] | [TOKENS: 10883] |
Contents Geodesics in general relativity In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic. In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.

Mathematical expression

The full geodesic equation is

$$\frac{d^{2}x^{\mu}}{ds^{2}} + \Gamma^{\mu}{}_{\alpha\beta} \frac{dx^{\alpha}}{ds} \frac{dx^{\beta}}{ds} = 0$$

where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^{\mu}{}_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.

Equivalent mathematical expression using coordinate time as parameter

So far the geodesic equation of motion has been written in terms of a scalar parameter $s$. It can alternatively be written in terms of the time coordinate, $t \equiv x^{0}$ (here the triple bar signifies a definition). The geodesic equation of motion then becomes:

$$\frac{d^{2}x^{\mu}}{dt^{2}} = -\Gamma^{\mu}{}_{\alpha\beta} \frac{dx^{\alpha}}{dt} \frac{dx^{\beta}}{dt} + \Gamma^{0}{}_{\alpha\beta} \frac{dx^{\alpha}}{dt} \frac{dx^{\beta}}{dt} \frac{dx^{\mu}}{dt}.$$

This formulation of the geodesic equation of motion can be useful for computer calculations and to compare general relativity with Newtonian gravity. It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter, using the chain rule. Notice that both sides of this last equation vanish when the index $\mu$ is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to:

$$\frac{d^{2}x^{n}}{dt^{2}} = -\Gamma^{n}{}_{00}.$$

Here the Latin index $n$ takes the values 1, 2, 3. This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity.
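The Newtonian-limit form above lends itself to direct numerical integration. Below is a minimal sketch, assuming a weak-field point-mass potential and geometrized units ($G = c = 1$); the function names, the initial conditions, and the truncation of the Christoffel symbols to $\Gamma^{n}{}_{00} = \partial\Phi/\partial x^{n}$ are illustrative choices of this sketch, not part of the article.

```python
# Sketch: integrate the geodesic equation
#   d^2 x^mu/ds^2 = -Gamma^mu_{alpha beta} (dx^alpha/ds)(dx^beta/ds)
# as a first-order ODE system, given any callable returning Christoffel symbols.
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(s, y, christoffel):
    """y = (x^0..x^3, u^0..u^3) with u^mu = dx^mu/ds."""
    x, u = y[:4], y[4:]
    Gamma = christoffel(x)                      # array of shape (4, 4, 4)
    du = -np.einsum('mab,a,b->m', Gamma, u, u)  # -Gamma^m_{ab} u^a u^b
    return np.concatenate([u, du])

def christoffel_weak_field(x, M=1.0):
    """Keep only Gamma^n_00 = dPhi/dx^n for Phi = -M/r (slow-motion limit)."""
    Gamma = np.zeros((4, 4, 4))
    r_vec = x[1:]
    r = np.linalg.norm(r_vec)
    Gamma[1:, 0, 0] = M * r_vec / r**3          # gradient of Phi = -M/r
    return Gamma

# Slow test particle: u^0 ~ 1, small spatial velocity (roughly circular orbit).
y0 = np.array([0.0, 10.0, 0.0, 0.0,            # initial x^mu
               1.0, 0.0, 0.3, 0.0])            # initial u^mu
sol = solve_ivp(geodesic_rhs, (0.0, 200.0), y0,
                args=(christoffel_weak_field,), rtol=1e-9, dense_output=True)
print(sol.y[1:4, -1])                          # final spatial position
```

With only $\Gamma^{n}{}_{00}$ retained, the integrator reproduces Newton's $d^{2}x^{n}/dt^{2} = -\partial\Phi/\partial x^{n}$, which is exactly the reduction described above.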
Derivation directly from the equivalence principle

Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle. The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system ($X^{\mu}$). Setting $T \equiv X^{0}$, we have the following equation that is locally applicable in free fall:

$$\frac{d^{2}X^{\mu}}{dT^{2}} = 0.$$

The next step is to employ the multi-dimensional chain rule. We have:

$$\frac{dX^{\mu}}{dT} = \frac{dx^{\nu}}{dT} \frac{\partial X^{\mu}}{\partial x^{\nu}}$$

Differentiating once more with respect to the time, we have:

$$\frac{d^{2}X^{\mu}}{dT^{2}} = \frac{d^{2}x^{\nu}}{dT^{2}} \frac{\partial X^{\mu}}{\partial x^{\nu}} + \frac{dx^{\nu}}{dT} \frac{dx^{\alpha}}{dT} \frac{\partial^{2} X^{\mu}}{\partial x^{\nu}\,\partial x^{\alpha}}$$

We have already said that the left-hand side of this last equation must vanish because of the equivalence principle. Therefore:

$$\frac{d^{2}x^{\nu}}{dT^{2}} \frac{\partial X^{\mu}}{\partial x^{\nu}} = -\frac{dx^{\nu}}{dT} \frac{dx^{\alpha}}{dT} \frac{\partial^{2} X^{\mu}}{\partial x^{\nu}\,\partial x^{\alpha}}$$

Multiplying both sides of this last equation by $\frac{\partial x^{\lambda}}{\partial X^{\mu}}$, we have:

$$\frac{d^{2}x^{\lambda}}{dT^{2}} = -\frac{dx^{\nu}}{dT} \frac{dx^{\alpha}}{dT} \left[ \frac{\partial^{2} X^{\mu}}{\partial x^{\nu}\,\partial x^{\alpha}} \frac{\partial x^{\lambda}}{\partial X^{\mu}} \right].$$

Weinberg defines the affine connection as follows:

$$\Gamma^{\lambda}{}_{\nu\alpha} = \left[ \frac{\partial^{2} X^{\mu}}{\partial x^{\nu}\,\partial x^{\alpha}} \frac{\partial x^{\lambda}}{\partial X^{\mu}} \right]$$

which leads to this formula:

$$\frac{d^{2}x^{\lambda}}{dT^{2}} = -\Gamma^{\lambda}{}_{\nu\alpha} \frac{dx^{\nu}}{dT} \frac{dx^{\alpha}}{dT}.$$

This completes our derivation, since the proper time is defined as the local time at a point that follows the line of motion in question (in this case the geodesic line of a free falling particle). Let us continue in order to derive the equations using the coordinate time as parameter. Applying the one-dimensional chain rule:

$$\frac{d^{2}x^{\lambda}}{dt^{2}} \left(\frac{dt}{dT}\right)^{2} + \frac{dx^{\lambda}}{dt} \frac{d^{2}t}{dT^{2}} = -\Gamma^{\lambda}{}_{\nu\alpha} \frac{dx^{\nu}}{dt} \frac{dx^{\alpha}}{dt} \left(\frac{dt}{dT}\right)^{2}.$$

$$\frac{d^{2}x^{\lambda}}{dt^{2}} + \frac{dx^{\lambda}}{dt} \frac{d^{2}t}{dT^{2}} \left(\frac{dT}{dt}\right)^{2} = -\Gamma^{\lambda}{}_{\nu\alpha} \frac{dx^{\nu}}{dt} \frac{dx^{\alpha}}{dt}.$$

As before, we can set $t \equiv x^{0}$.
Then the first derivative of $x^{0}$ with respect to $t$ is one and the second derivative is zero. Replacing $\lambda$ with zero gives:

$$\frac{d^{2}t}{dT^{2}} \left(\frac{dT}{dt}\right)^{2} = -\Gamma^{0}{}_{\nu\alpha} \frac{dx^{\nu}}{dt} \frac{dx^{\alpha}}{dt}.$$

Subtracting $\frac{dx^{\lambda}}{dt}$ times this from the previous equation gives:

$$\frac{d^{2}x^{\lambda}}{dt^{2}} = -\Gamma^{\lambda}{}_{\nu\alpha} \frac{dx^{\nu}}{dt} \frac{dx^{\alpha}}{dt} + \Gamma^{0}{}_{\nu\alpha} \frac{dx^{\nu}}{dt} \frac{dx^{\alpha}}{dt} \frac{dx^{\lambda}}{dt}$$

which is the form of the geodesic equation of motion using the coordinate time as parameter. The geodesic equation of motion can alternatively be derived using the concept of parallel transport.

Deriving the geodesic equation via an action

We can (and this is the most common technique) derive the geodesic equation via the action principle. Consider the case of trying to find a geodesic between two timelike-separated events. Let the action be

$$S = \int ds$$

where

$$ds = \sqrt{-g_{\mu\nu}(x) \, dx^{\mu} \, dx^{\nu}}$$

is the line element. There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this, let us parameterize the action with respect to a parameter $\lambda$. Doing this we get:

$$S = \int \sqrt{-g_{\mu\nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda}} \, d\lambda$$

We can now go ahead and vary this action with respect to the curve $x^{\mu}$.
By the principle of least action we get:

$$0 = \delta S = \int \delta\left( \sqrt{-g_{\mu\nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda}} \right) d\lambda = \int \frac{\delta\left( -g_{\mu\nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda} \right)}{2\sqrt{-g_{\mu\nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda}}} \, d\lambda$$

Using the product rule we get:

$$0 = \int \left( \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\tau} \delta g_{\mu\nu} + g_{\mu\nu} \frac{d\,\delta x^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\tau} + g_{\mu\nu} \frac{dx^{\mu}}{d\tau} \frac{d\,\delta x^{\nu}}{d\lambda} \right) d\lambda = \int \left( \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\tau} \partial_{\alpha} g_{\mu\nu} \, \delta x^{\alpha} + 2 g_{\mu\nu} \frac{d\,\delta x^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\tau} \right) d\lambda$$

where

$$\frac{d\tau}{d\lambda} = \sqrt{-g_{\mu\nu} \frac{dx^{\mu}}{d\lambda} \frac{dx^{\nu}}{d\lambda}}$$

Integrating by parts the last term and dropping the total derivative (which equals zero at the boundaries) we get that:

$$0 = \int \left( \frac{dx^{\mu}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\alpha} g_{\mu\nu} \, \delta x^{\alpha} - 2\,\delta x^{\mu} \frac{d}{d\tau}\left( g_{\mu\nu} \frac{dx^{\nu}}{d\tau} \right) \right) d\tau = \int \left( \frac{dx^{\mu}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\alpha} g_{\mu\nu} \, \delta x^{\alpha} - 2\,\delta x^{\mu} \partial_{\alpha} g_{\mu\nu} \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} - 2\,\delta x^{\mu} g_{\mu\nu} \frac{d^{2}x^{\nu}}{d\tau^{2}} \right) d\tau$$

Simplifying a bit, we see that:

$$0 = \int \left( -2 g_{\mu\nu} \frac{d^{2}x^{\nu}}{d\tau^{2}} + \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\mu} g_{\alpha\nu} - 2 \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\alpha} g_{\mu\nu} \right) \delta x^{\mu} \, d\tau$$

so,

$$0 = \int \left( -2 g_{\mu\nu} \frac{d^{2}x^{\nu}}{d\tau^{2}} + \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\mu} g_{\alpha\nu} - \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \partial_{\alpha} g_{\mu\nu} - \frac{dx^{\nu}}{d\tau} \frac{dx^{\alpha}}{d\tau} \partial_{\nu} g_{\mu\alpha} \right) \delta x^{\mu} \, d\tau$$

Multiplying this equation by $-\frac{1}{2}$ we get:

$$0 = \int \left( g_{\mu\nu} \frac{d^{2}x^{\nu}}{d\tau^{2}} + \frac{1}{2} \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \left( \partial_{\alpha} g_{\mu\nu} + \partial_{\nu} g_{\mu\alpha} - \partial_{\mu} g_{\alpha\nu} \right) \right) \delta x^{\mu} \, d\tau$$

So by Hamilton's principle we find that the Euler–Lagrange equation is

$$g_{\mu\nu} \frac{d^{2}x^{\nu}}{d\tau^{2}} + \frac{1}{2} \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} \left( \partial_{\alpha} g_{\mu\nu} + \partial_{\nu} g_{\mu\alpha} - \partial_{\mu} g_{\alpha\nu} \right) = 0$$

Multiplying by the inverse metric tensor $g^{\mu\beta}$ we get that

$$\frac{d^{2}x^{\beta}}{d\tau^{2}} + \frac{1}{2} g^{\mu\beta} \left( \partial_{\alpha} g_{\mu\nu} + \partial_{\nu} g_{\mu\alpha} - \partial_{\mu} g_{\alpha\nu} \right) \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} = 0$$

Thus we get the geodesic equation:

$$\frac{d^{2}x^{\beta}}{d\tau^{2}} + \Gamma^{\beta}{}_{\alpha\nu} \frac{dx^{\alpha}}{d\tau} \frac{dx^{\nu}}{d\tau} = 0$$

with the Christoffel symbol defined in terms of the metric tensor as

$$\Gamma^{\beta}{}_{\alpha\nu} = \frac{1}{2} g^{\mu\beta} \left( \partial_{\alpha} g_{\mu\nu} + \partial_{\nu} g_{\mu\alpha} - \partial_{\mu} g_{\alpha\nu} \right)$$

(Note: similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like[citation needed] or space-like separated pairs of points.)

Equation of motion may follow from the field equations for empty space

Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space, i.e. from the fact that the Ricci curvature vanishes. He wrote: "It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points." and "One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic. A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it. On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide." Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity, but this claim remains disputed. According to David Malament, "Though the geodesic principle can be recovered as theorem in general relativity, it is not a consequence of Einstein's equation (or the conservation principle) alone. Other assumptions are needed to derive the theorems in question." Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity.
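Before moving on, note that the closing formula above, $\Gamma^{\beta}{}_{\alpha\nu} = \frac{1}{2} g^{\mu\beta}\left(\partial_{\alpha} g_{\mu\nu} + \partial_{\nu} g_{\mu\alpha} - \partial_{\mu} g_{\alpha\nu}\right)$, is mechanical enough to evaluate symbolically. The following is a hedged sketch, not from the article: the helper name and the example weak-field metric are assumptions made for illustration.

```python
# Sketch: compute Gamma^b_{a n} = (1/2) g^{mb} (d_a g_{mn} + d_n g_{ma} - d_m g_{an})
# symbolically with SymPy for a user-supplied metric matrix.
import sympy as sp

def christoffel_symbols(g, coords):
    """Return nested lists Gamma[b][a][n] for metric matrix g in coords."""
    dim = len(coords)
    g_inv = g.inv()
    return [[[sp.simplify(sp.Rational(1, 2) * sum(
                g_inv[m, b] * (sp.diff(g[m, n], coords[a])
                               + sp.diff(g[m, a], coords[n])
                               - sp.diff(g[a, n], coords[m]))
                for m in range(dim)))
              for n in range(dim)]
             for a in range(dim)]
            for b in range(dim)]

# Assumed example: static weak-field metric
# ds^2 = -(1 + 2 Phi) dt^2 + (1 - 2 Phi)(dx^2 + dy^2 + dz^2), Phi = Phi(x, y, z).
t, x, y, z = sp.symbols('t x y z')
Phi = sp.Function('Phi')(x, y, z)
g = sp.diag(-(1 + 2*Phi), 1 - 2*Phi, 1 - 2*Phi, 1 - 2*Phi)
Gamma = christoffel_symbols(g, [t, x, y, z])
print(Gamma[1][0][0])   # Gamma^x_{tt} = (dPhi/dx)/(1 - 2*Phi) ~ dPhi/dx for small Phi
```

The printed symbol reduces, to first order in $\Phi$, to $\partial\Phi/\partial x$, in agreement with the Newtonian limit quoted earlier.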
Extension to the case of a charged particle

In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force. That is:

$$\frac{d^{2}X^{\mu}}{ds^{2}} = \frac{q}{m} F^{\mu\beta} \frac{dX^{\alpha}}{ds} \eta_{\alpha\beta}$$

with

$$\eta_{\alpha\beta} \frac{dX^{\alpha}}{ds} \frac{dX^{\beta}}{ds} = -1.$$

The Minkowski tensor $\eta_{\alpha\beta}$ is given by:

$$\eta_{\alpha\beta} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$

These last three equations can be used as the starting point for the derivation of an equation of motion in general relativity, instead of assuming that acceleration is zero in free fall. Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in general relativity. The metric tensor $g$ is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows:

$$\frac{d^{2}x^{\mu}}{ds^{2}} = -\Gamma^{\mu}{}_{\alpha\beta} \frac{dx^{\alpha}}{ds} \frac{dx^{\beta}}{ds} + \frac{q}{m} F^{\mu\beta} \frac{dx^{\alpha}}{ds} g_{\alpha\beta}$$

with

$$g_{\alpha\beta} \frac{dx^{\alpha}}{ds} \frac{dx^{\beta}}{ds} = -1.$$

This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace $-1$ with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency:

$$\Gamma^{\lambda}{}_{\alpha\beta} = \frac{1}{2} g^{\lambda\tau} \left( \frac{\partial g_{\tau\alpha}}{\partial x^{\beta}} + \frac{\partial g_{\tau\beta}}{\partial x^{\alpha}} - \frac{\partial g_{\alpha\beta}}{\partial x^{\tau}} \right)$$

This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter $g$ with superscripts refers to the inverse of the metric tensor. In general relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively.

Geodesics as curves of stationary interval

A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations, namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic. In simply connected Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events.
In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along the several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic. For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. Any curve that differs from the geodesic purely spatially (i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally (i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length.

The interval of a curve in spacetime is

$$l = \int \sqrt{\left| g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} \right|} \, ds.$$

Then, the Euler–Lagrange equation,

$$\frac{d}{ds} \frac{\partial}{\partial \dot{x}^{\alpha}} \sqrt{\left| g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} \right|} = \frac{\partial}{\partial x^{\alpha}} \sqrt{\left| g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} \right|},$$

becomes, after some calculation,

$$2 \left( \Gamma^{\lambda}{}_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} + \ddot{x}^{\lambda} \right) = U^{\lambda} \frac{d}{ds} \ln \left| U_{\nu} U^{\nu} \right|,$$

where $U^{\mu} = \dot{x}^{\mu}$. The goal being to find a curve for which the value of

$$l = \int d\tau = \int \frac{d\tau}{d\phi} \, d\phi = \int \sqrt{\frac{(d\tau)^{2}}{(d\phi)^{2}}} \, d\phi = \int \sqrt{-g_{\mu\nu} \frac{dx^{\mu}}{d\phi} \frac{dx^{\nu}}{d\phi}} \, d\phi = \int f \, d\phi$$

is stationary, where

$$f = \sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}},$$

such a goal can be accomplished by calculating the Euler–Lagrange equation for $f$, which is

$$\frac{d}{d\tau} \frac{\partial f}{\partial \dot{x}^{\lambda}} = \frac{\partial f}{\partial x^{\lambda}}.$$

Substituting the expression of $f$ into the Euler–Lagrange equation (which makes the value of the integral $l$ stationary) gives

$$\frac{d}{d\tau} \frac{\partial \sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}}{\partial \dot{x}^{\lambda}} = \frac{\partial \sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}}{\partial x^{\lambda}}$$

Now calculate the derivatives:

$$\frac{d}{d\tau}\left( \frac{-g_{\mu\nu} \frac{\partial \dot{x}^{\mu}}{\partial \dot{x}^{\lambda}} \dot{x}^{\nu} - g_{\mu\nu} \dot{x}^{\mu} \frac{\partial \dot{x}^{\nu}}{\partial \dot{x}^{\lambda}}}{2\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \right) = \frac{-g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu}}{2\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \qquad (1)$$

$$\frac{d}{d\tau}\left( \frac{g_{\mu\nu} \delta^{\mu}{}_{\lambda} \dot{x}^{\nu} + g_{\mu\nu} \dot{x}^{\mu} \delta^{\nu}{}_{\lambda}}{2\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \right) = \frac{g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu}}{2\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \qquad (2)$$

$$\frac{d}{d\tau}\left( \frac{g_{\lambda\nu} \dot{x}^{\nu} + g_{\mu\lambda} \dot{x}^{\mu}}{\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \right) = \frac{g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu}}{\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \qquad (3)$$

$$\frac{\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}} \, \frac{d}{d\tau}\left(g_{\lambda\nu} \dot{x}^{\nu} + g_{\mu\lambda} \dot{x}^{\mu}\right) - \left(g_{\lambda\nu} \dot{x}^{\nu} + g_{\mu\lambda} \dot{x}^{\mu}\right) \frac{d}{d\tau}\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}}{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}} = \frac{g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu}}{\sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}}} \qquad (4)$$

$$\frac{\left(-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}\right) \frac{d}{d\tau}\left(g_{\lambda\nu} \dot{x}^{\nu} + g_{\mu\lambda} \dot{x}^{\mu}\right) + \frac{1}{2}\left(g_{\lambda\nu} \dot{x}^{\nu} + g_{\mu\lambda} \dot{x}^{\mu}\right) \frac{d}{d\tau}\left(g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}\right)}{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}} = g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu} \qquad (5)$$

$$\left(g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}\right)\left(g_{\lambda\nu,\mu} \dot{x}^{\nu} \dot{x}^{\mu} + g_{\mu\lambda,\nu} \dot{x}^{\mu} \dot{x}^{\nu} + g_{\lambda\nu} \ddot{x}^{\nu} + g_{\lambda\mu} \ddot{x}^{\mu}\right) = \left(g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu}\right)\left(g_{\alpha\beta} \dot{x}^{\alpha} \dot{x}^{\beta}\right) + \frac{1}{2}\left(g_{\lambda\nu} \dot{x}^{\nu} + g_{\lambda\mu} \dot{x}^{\mu}\right) \frac{d}{d\tau}\left(g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}\right) \qquad (6)$$

$$g_{\lambda\nu,\mu} \dot{x}^{\mu} \dot{x}^{\nu} + g_{\lambda\mu,\nu} \dot{x}^{\mu} \dot{x}^{\nu} - g_{\mu\nu,\lambda} \dot{x}^{\mu} \dot{x}^{\nu} + 2 g_{\lambda\mu} \ddot{x}^{\mu} = \frac{\dot{x}_{\lambda} \frac{d}{d\tau}\left(g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}\right)}{g_{\alpha\beta} \dot{x}^{\alpha} \dot{x}^{\beta}} \qquad (7)$$

$$2\left(\Gamma_{\lambda\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} + \ddot{x}_{\lambda}\right) = \frac{\dot{x}_{\lambda} \frac{d}{d\tau}\left(\dot{x}_{\nu} \dot{x}^{\nu}\right)}{\dot{x}_{\beta} \dot{x}^{\beta}} = \frac{U_{\lambda} \frac{d}{d\tau}\left(U_{\nu} U^{\nu}\right)}{U_{\beta} U^{\beta}} = U_{\lambda} \frac{d}{d\tau} \ln\left|U_{\nu} U^{\nu}\right| \qquad (8)$$

This is just one step away from the geodesic equation. If the parameter $s$ is chosen to be affine, then the right side of the above equation vanishes (because $U_{\nu} U^{\nu}$ is constant). Finally, we have the geodesic equation

$$\Gamma^{\lambda}{}_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu} + \ddot{x}^{\lambda} = 0.$$

Derivation using autoparallel transport

The geodesic equation can alternatively be derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. Schuller at the We-Heraeus International Winter School on Gravity & Light. Let $(M, O, A, \nabla)$ be a smooth manifold with connection and $\gamma$ be a curve on the manifold. The curve is said to be autoparallely transported if and only if $\nabla_{v_{\gamma}} v_{\gamma} = 0$.
In order to derive the geodesic equation, we have to choose a chart $(U, x) \in A$:

$$\nabla_{\dot{\gamma}^{i} \frac{\partial}{\partial x^{i}}} \left( \dot{\gamma}^{m} \frac{\partial}{\partial x^{m}} \right) = 0$$

Using the $C^{\infty}$-linearity and the Leibniz rule:

$$\dot{\gamma}^{i} \left( \nabla_{\frac{\partial}{\partial x^{i}}} \dot{\gamma}^{m} \right) \frac{\partial}{\partial x^{m}} + \dot{\gamma}^{i} \dot{\gamma}^{m} \nabla_{\frac{\partial}{\partial x^{i}}} \left( \frac{\partial}{\partial x^{m}} \right) = 0$$

Using how the connection acts on functions ($\dot{\gamma}^{m}$) and expanding the second term with the help of the connection coefficient functions:

$$\dot{\gamma}^{i} \frac{\partial \dot{\gamma}^{m}}{\partial x^{i}} \frac{\partial}{\partial x^{m}} + \dot{\gamma}^{i} \dot{\gamma}^{m} \Gamma^{q}{}_{im} \frac{\partial}{\partial x^{q}} = 0$$

The first term can be simplified to $\ddot{\gamma}^{m} \frac{\partial}{\partial x^{m}}$. Renaming the dummy indices:

$$\ddot{\gamma}^{q} \frac{\partial}{\partial x^{q}} + \dot{\gamma}^{i} \dot{\gamma}^{m} \Gamma^{q}{}_{im} \frac{\partial}{\partial x^{q}} = 0$$

We finally arrive at the geodesic equation:

$$\ddot{\gamma}^{q} + \dot{\gamma}^{i} \dot{\gamma}^{m} \Gamma^{q}{}_{im} = 0$$
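As a closing numerical aside, an illustrative sketch rather than part of the article: the stationary-interval derivation noted that $U_{\nu}U^{\nu}$ is constant when the parameter is affine, which provides a cheap sanity check for a geodesic integrator such as the one sketched earlier, since $g_{\mu\nu} u^{\mu} u^{\nu}$ should stay constant along the computed trajectory up to integration and modelling error. The metric below is the same assumed weak-field example; because the earlier sketch truncated the Christoffel symbols to $\Gamma^{n}{}_{00}$, the conservation there holds only approximately.

```python
# Sketch: check that g_{mu nu} u^mu u^nu stays (nearly) constant along a
# numerically integrated geodesic -- the hallmark of an affine parameter.
import numpy as np

def weak_field_metric(x, M=1.0):
    """Assumed metric: ds^2 = -(1 + 2 Phi) dt^2 + (1 - 2 Phi) dx_i dx^i."""
    phi = -M / np.linalg.norm(x[1:])
    return np.diag([-(1 + 2*phi), 1 - 2*phi, 1 - 2*phi, 1 - 2*phi])

def norm_along(sol, metric, samples=5):
    """Print u.g.u at a few parameter values of a dense solve_ivp solution."""
    for s in np.linspace(sol.t[0], sol.t[-1], samples):
        y = sol.sol(s)                 # requires dense_output=True
        x, u = y[:4], y[4:]
        print(f"s = {s:8.2f}   u.g.u = {u @ metric(x) @ u:+.6f}")
```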
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Grindylow] | [TOKENS: 216] |
Contents Grindylow In English folklore, the Grindylow or Grundylow is a creature of the counties of Yorkshire and Lancashire. The name is thought to be connected to Grendel, a name or term used in Beowulf and in many Old English charters, where it is seen in connection with meres, bogs and lakes. Grindylows are supernatural creatures that appear in the folklore of England, most notably the Lancaster area. They are described as diminutive humanoids with scaly skin, a greenish complexion, sharp claws and teeth, and long, wiry arms with lengthy fingers at the end. They are said to dwell in ponds and marshes, waiting for unsuspecting children, whom they grab with their shockingly strong grip and drag under the surface of the waters. Grindylows have been used as shadowy figures to frighten children away from pools, marshes, or ponds where they could drown. Peg Powler, Nelly Longarms, and Jenny Greenteeth are similar water spirits.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Effects_of_violence_in_mass_media] | [TOKENS: 5153] |
Contents Effects of violence in mass media The study of violence in mass media analyzes the degree of correlation between themes of violence in media sources (particularly violence in video games, television and films) and real-world aggression and violence over time. Many social scientists support the correlation; however, some scholars argue that media research has methodological problems and that findings are exaggerated. Other scholars have suggested that the correlation exists, but may run counter to current public belief. Complaints about the possible detrimental effects of mass media appear throughout history; Plato was concerned about the effects of plays on youth. Various media/genres, including dime novels, comic books, jazz, rock and roll, role playing/computer games, television, films, the internet (by computer or cell phone) and many others have attracted speculation that consumers of such media may become more aggressive, rebellious or immoral. This has led some scholars to conclude that statements made by some researchers merely fit into a cycle of media-based moral panics. The advent of television prompted research into the effects of this new medium in the 1960s. Much of this research has been guided by social learning theory, developed by Albert Bandura. Social learning theory suggests that one way in which human beings learn is by the process of modeling. Another popular theory is George Gerbner's cultivation theory, which suggests that viewers internalize the violence they see on television and apply it to their perception of the real world. Other theories include social cognitive theory, the catalyst model, and moral panic theory. Media effects theories Social learning theory was proposed by Albert Bandura and suggests that people learn by observing the outcomes of others' actions, and that rewarded actions are more likely to be imitated than punished ones. The behavior was observed in Bandura's Bobo doll experiments. Bandura presented children with an aggressive model: the model played with "harmless" tinker toys for a minute or so but then progressed to the Bobo doll, laying the doll down and being violent toward it (punching its nose, hitting it with a mallet, tossing it in the air, and kicking it), while making verbal comments about the scenario. The children were then put in a room with a Bobo doll to see whether they would imitate the behavior previously seen in the video. Bandura's Bobo doll experiments demonstrated that children learn behaviors, including aggression, through observation and imitation. Beyond imitating the aggressive actions they observed, children often displayed novel forms of aggression, which highlights the powerful influence of modeled behavior. The study also revealed that children were more likely to imitate aggression if the model was rewarded, but far less likely if the model was punished, emphasizing the role of consequences in shaping observational learning. The findings of this experiment suggest that children tended to model the behavior they witnessed in the video. This has often been taken to imply that children may imitate aggressive behaviors witnessed in media. However, Bandura's experiments have been criticized on several grounds. First, it is difficult to generalize from aggression toward a Bobo doll (which is intended to be hit) to person-on-person violence. Second, it may be possible that the children were motivated simply to please the experimenter rather than to be aggressive.
In other words, the children may have viewed the videos as instructions, rather than as incentives to feel more aggressive. Third, in a later study Bandura included a condition in which the adult model was punished for hitting the Bobo doll by himself being physically punished: specifically, the adult was pushed down in the video by the experimenter and hit with a newspaper while being berated. This actual person-on-person violence decreased the children's aggressive acts, probably due to vicarious reinforcement. Nonetheless, these last results indicate that even young children do not automatically imitate aggression, but rather consider the context of aggression. Children with aggression have difficulty communicating compassionately, and over time "teen gamers" can become unaware of their surroundings and lack social interaction in real life. A 2019 article by Beate Hygen mentions that video game violence can impact an individual's essential social skills, such as regulating emotions, maintaining good behavior towards others, listening and understanding, responding and communicating, knowing verbal and non-verbal cues, sharing their thoughts, and cooperating with others. According to a survey by Chang published May 31, 2019 in the medical journal JAMA Network Open, kids who repeatedly played violent video games learned to think in hostile ways that could eventually influence their behavior and cause them to become aggressive in nature. Given that some scholars estimate that children's viewing of violence in media is quite common, concerns about media often follow social learning theoretical approaches. Social cognitive theories build upon social learning theory, but suggest that aggression may be activated by learning and priming aggressive scripts. Desensitization and arousal/excitation are also included in later social cognitive theories. The concept of desensitization in particular has received much interest from the scholarly community and general public. It is theorized that with repeated exposure to media violence, a psychological saturation or emotional adjustment takes place such that initial levels of anxiety and disgust diminish or weaken. For example, in a study conducted in 2016, a sample of college students were assigned at random to play either a violent or non-violent video game for 20 minutes. They were then asked to watch a 10-minute video of real-life violence. The students who had played the violent video games were observed to be significantly less affected by a simulated aggressive act than those who did not play the violent video games. However, the degree to which the simulation was "believable" to the participants, or to which the participants may have responded to "demand characteristics", is unclear (see § Criticism below). Separately, the American Psychological Association's technical report on the review of the violent video game literature, written by Caldwell in February 2020 as a revision to the 2015 resolution, notes that playing video games is popularly associated with adolescence: "Children younger than age eight who play video games spend a daily average of 69 minutes on handheld console games, 57 minutes on computer games, and 45 minutes on mobile games, including tablets." Nonetheless, social cognitive theory was arguably the most dominant paradigm of media violence effects for many years, although it has come under recent criticism.
Recent scholarship has suggested that social cognitive theories of aggression are outdated and should be retired. Some scholars also argue that the continuous viewing of violent acts makes teenagers more susceptible to becoming violent themselves. Children of a young age are good observers; they learn by mimicking and adapting behavior. Claims that playing violent video games creates a fear of real-life violence have been found valid only for adolescents with underlying psychological problems. According to a journal article by McGloin in 2015, media violence can trigger aggressive behavioral change in individuals already characterized as highly aggressive, and such individuals can face severe consequences from media violence, which can increase "bullying behavior." Cultivation theory was created by George Gerbner as an alternative way to look at the correlation between violence as seen on television and the individual. Gerbner described the violence that most of the population was viewing on television as "happy violence", because he noticed that most of the violence seen on television was followed by a happy ending. Gerbner considered this significant because he believed that the real world does not contain "happy violence", as sometimes there is violence for no reason. Because the violence seen on television is so captivating for viewers, Gerbner believed that the population would come to think that violence, whether fictional in movies and shows or non-fictional from the news, would directly affect them. Gerbner described this with the "magic bullet theory", which pictures the violence seen on television as a "magic bullet" that reaches beyond the screen right into every individual viewer. After this occurs, cultivation begins: the individual develops a perception of real-world violence around them. Over time, consumers of media will cultivate the violence seen on television and consider this to be how the real world actually is. This leads to assumptions that local and national crime rates are increasing when they are actually decreasing. Additionally, it leads to negative assumptions about certain groups that are mainly shown as violent on television, the main example being illegal immigrants coming from Mexico into the United States: frequent news watchers who see cases of violent crimes conducted by illegal immigrants come to believe that all illegal immigrants act this way, leading to their disapproval of them. Gerbner called the product of all the previous theories the "Mean World Syndrome", in which a viewer will eventually believe that they live in a world full of deviance and aberrancy. One alternative theory is the catalyst model, which has been proposed to explain the etiology of violence. The catalyst model is a newer theory and has not been tested extensively. According to the catalyst model, violence arises from a combination of genetic and early social influences (family and peers in particular), and media violence is explicitly considered a weak causal influence. Specific violent acts are catalyzed by stressful environmental circumstances, with less stress required to catalyze violence in individuals with greater violence predisposition. Some early work has supported this view. Research from 2013 with inmates has likewise provided support for the catalyst model.
Specifically, as suggested by the catalyst model, perpetrators of crimes sometimes included stylistic elements or behaviors in their crimes that they had seen in media, but the motivation to commit crimes itself was unrelated to media viewing and instead internal. A final theory relevant to this area is the moral panic. Elucidated largely by David Gauntlett, this theory postulates that concerns about new media are historical and cyclical. In this view, a society forms a predetermined negative belief about a new medium, typically one not used by the older and more powerful members of the society. Research studies and positions taken by scholars and politicians tend to confirm the pre-existing belief, rather than dispassionately observe and evaluate the issue. Eventually the panic dies out after several years or decades, but ultimately resurfaces when yet another new medium is introduced. The general aggression model (GAM) proposed by Craig A. Anderson and Brad Bushman is a meta-theory that examines how situational and personal variables, ranging from the biological to the cultural, shape aggressive behavior. These variables stem from one's internal states (feelings, thoughts, arousal) and the appraisals and decisions one makes (both automatic and controlled). The GAM was not originally designed as a model to account for media violence, but as a general model of aggressive behavior. By focusing on general media violence effects, both short- and long-term, and on the pathways from exposure to those effects, the GAM can be used to account for media violence effects. Since the GAM is a bio-social-cognitive model, it can explain how both media violence and the social environment shape behavior. One study examined how brain structure can contribute to the GAM's account of processing: fast-paced technology, such as video games and television, changes brain structure in regions associated with executive control and impulse control. Moreover, the social environment is an important piece of the GAM. Because a media diet is part of the social environment, media violence can affect processing. Two-step flow theory, created by Paul Lazarsfeld in 1944, opposes the notion that the effect of mass media is a direct one. Instead, it suggests that the information and ideas coming from the mass media go to people known as opinion leaders. Opinion leaders gather the information they hear, make sense of it, and develop a narrative that they would like to push. The opinion leaders then share their views and ideas with the general public, who take on the role of opinion followers. Mass media can give information to many different opinion leaders, each of whom will disseminate information in their own unique way and gain a following of opinion followers who believe their specific outlook on the information. This can lead to many different groups that believe similar or vastly different things that all began at the same source. A popular example of this is news outlets that have political biases: a conservative news source will disseminate information that is typically more accepted and followed by a conservative viewership, and the same goes for more liberal news outlets with a much more liberal following. Plato was a Greek philosopher who contributed many early thoughts on the effects that media had on individuals.
In one of his works, he mentions the dangers of inappropriate poetry perverting its audience. He insisted that people's perceptions of poetry would later translate to their perceptions of life, fitting in with George Gerbner's theory of cultivation, which assumes that what is seen in media will be applied to the real world. Criticism Although organizations such as the American Academy of Pediatrics and the American Psychological Association have suggested that thousands of studies (3,500 according to the AAP) have been conducted confirming this link, others have argued that this information is incorrect. Rather, only about two hundred studies have been conducted in peer-reviewed scientific journals on the effects of violence depicted in television shows, films, songs and video games. Critics argue that about half find some link between media and subsequent aggression (but not violent crime), whereas the other half do not find a link between consuming violent media and subsequent aggression of any kind. Critics of the media violence link focus on a number of methodological and theoretical problems, including (but not limited to) the following. A small study published in Royal Society Open Science on 13 March 2019 found that "both fans and non-fans of violent music exhibited a general negativity bias for violent imagery over neutral imagery regardless of the music genres." Desensitization of media violence The term desensitization can be defined as "a general concept that refers to responses to emotionally charged stimuli and describes the process by which a stimulus that initially elicits a strong physiological or emotional reaction becomes less and less capable of eliciting the response the more often it is presented". When first exposed to media violence, the user tends to respond with discomfort, fear, activated sweat glands, and an increased heart rate. With repeated and prolonged exposure to media violence, including movies, television, and video games, this psychological effect weakens, and the user eventually becomes emotionally and cognitively desensitized. Desensitization can also affect the way an individual views violence: something that repulses a viewer at first can later become normal due to repeated exposure to violent content. For example, if a child plays a lot of violent shooter games, they may start to become numb to the consequences of actual gun violence and how dangerous a threat it can be to others. Because it is "just a game" to them, they can commit actions virtually that they won't be reprimanded for, and this can cause them to believe that such behavior is socially acceptable under certain circumstances. If violence is shown as a "solution" to a problem in most forms of media, meaning that violence is committed to achieve a goal, this can cause consumers of such media to make that connection a reality. When faced with an altercation, one may consider violence rather than de-escalation because of what they have seen on TV, on the Internet, in music, and in video games. This violent response suddenly becomes much more appealing to the individual, and they start to consider things they would not have before. For example, in modern hip-hop music, there are many popular songs that glorify murder and openly talk about killing others as a means of settling disputes or asserting dominance in the community.
Whether or not these songs describe real events, many people listen to these lyrics and their minds become filled with violent imagery, regardless of whether they already had a predisposition to violence. This constant exposure to glorified violence keeps the listener in a state of arousal and heightened aggressiveness, which in turn causes changes in behavior. A level-headed person could come to think murder is an appropriate way to earn respect among peers because of what they are hearing in songs, and many young listeners imitate their favorite rappers and seek out altercations to look "hard" in front of others. An article published in 2006 by the National Institutes of Health investigated whether music genre affects a listener's tendencies for substance use and aggressive behavior. The study was conducted on college students under the age of 25, and results showed that 69% of the sample reported listening to rap music, which positively predicted frequency of marijuana use and aggressive behaviors; other genres were reported to negatively predict such behaviors. A study done in 2011 examined desensitization to violent media content among a sample of male and female students. Its findings suggest that users' physiological reactivity to violent media is lower when they regularly use violent media content. In the sample, both men and women showed a significant association between regular violent media use and an increased arousal response to violence, as well as between increased media violence use and readily available cognitions of aggression. Relationship between media violence and minor aggressive behaviors Given that little evidence links media violence to serious physical aggression, bullying or youth violence, at present most of the debate appears to focus on whether media violence may influence more minor forms of aggressiveness. A 1987 article reviewing a history of court cases dealing with violent acts of youths showed that the courts were hesitant to hold media at fault for the violent acts. At present, no consensus has been reached on this issue. For example, in 1974 the US Surgeon General testified to Congress that "the overwhelming consensus and the unanimous Scientific Advisory Committee's report indicates that televised violence, indeed, does have an adverse effect on certain members of our society." In a controlled experiment in 2016, one hundred and thirty-six children ages eight to twelve participated in an investigation of how playing violent video games affects grade-school children's physiological and cognitive responses to violence. Playing violent video games was linked to the activation of aggressive thoughts, whereas users playing an equally exciting but nonviolent game showed little to no aggression. When frustration levels were monitored for both games, users' frustration did not grow when playing the nonviolent game, but it did grow when playing the violent game. Cortisol monitoring indicated that playing violent video games activates the fight-or-flight response and the sympathetic nervous system, releasing stress hormones that can lead to aggressive behaviors. Because these video games increase arousal as well as aggressive thoughts, this may cause an individual to display aggressive behavior, as their senses are heightened and they have aggression primed as a response.
However, by 2001, the US Surgeon General's office and the Department of Health and Human Services had largely reversed this position, relegating media violence to only a minor role and noting many serious limitations in the research. Studies have also disagreed regarding whether media violence contributes to desensitization. On average, children ages 8-12 in the United States spend 4-6 hours a day watching or using screens, while teens spend up to 9 hours a day on average. This can be a point of concern for some, considering the availability of a vast selection of violent games to consumers. Today's popular games include Call of Duty, Fortnite, GTA, Rainbow Six Siege, and Red Dead Redemption. While these games may differ in the degree of violence presented to players, violence is still the key element on which these games are built. It is hard to blame aggressive behavior on a single cause, but with that much exposure to violent media, researchers have examined whether such pastimes can negatively influence mental health. A 2007 article published by the National Institutes of Health states that three ways in which media violence is believed to affect individuals in the short term are priming, arousal, and mimicry, all aspects of social cognitive theory. Priming is the process in which an observed stimulus can provoke a certain emotion or behavior in an individual due to the mental connection they have formed with the stimulus. For example, a ski mask can cause an individual to be on edge because, though not inherently bad, ski masks have been portrayed as objects of violent crime through their use by armed robbers. In the same way, certain groups of people can be stereotyped through associative priming: if the media commonly portrays black people as more dangerous and prone to violence than other races, an individual may form that connection in their brain and become overly cautious around black people. Arousal is another reason why media violence can increase aggression in individuals. Arousal, in psychology, is a state of excitement or energy expenditure linked to an emotion. For example, if an individual is aroused by a piece of media and some external factor provokes anger in them, excitation transfer may cause their response to be overly aggressive and different from how they would normally respond if not in a state of excitement. In terms of media violence, some researchers believe that constant exposure to violent content can cause consumers to always be in a heightened state of emotion, or "on edge", therefore making them behave in an overly aggressive manner. Mimicry is the simplest of these three concepts: it states that individuals mimic what they see, and it is more common in children than in adults. For example, say a child is watching a movie in which a character is made fun of by another character for how they look, and the character responds to the insult by punching the other in the face. The child may see this behavior and deem it an appropriate response to such provocation so that, when faced with the same scenario, they lash out in anger. A 2015 report from the American Psychological Association measured the effects of violent video games on aggression for both males and females. Results show a correlation value of about 0.2 in experimental, cross-sectional, and longitudinal studies; since variance explained is the square of the correlation, this corresponds to roughly 4% of the variance in aggression measures. As children advance into teen years, evidence for violent acts in relation to violent media becomes less consistent.
Although most scholars caution that this decline cannot be attributed to a causal effect, they conclude that this observation argues against causal harmful effects for media violence. A recent long-term outcome study of youth found no long-term relationship between consuming violent media and youth violence or bullying. A 2023 report explores how an increasingly saturated media environment can affect violent actions in children, even years after the media is consumed. For the study, children reported their consumption of music, video games, television, websites with real people, and cartoons with real people. After reviewing the children's media diet and behavior, the study found that exposure to violent media can influence their behavior: consuming a diverse media diet increased the likelihood that a child engaged in seriously violent behavior five to ten years after consumption. The amount of media available to children creates space for them to consume more media that depicts violence as an outlet for anger. However, no single factor can be credited for the manner in which a child behaves. Much of the research on media and violence derives from the United States, particularly the related research fields of psychology and media/communication studies. Research in Europe and Australia on the relationship between media and violence is far broader and is much more clearly embedded in politics, culture and social relationships. A study done in 2016 examined the relationship between media violence and aggression across different cultures. It was the first cross-cultural study to look at both the effects of media violence and the cross-cultural generality of those effects on aggression. Samples obtained from seven different countries (Australia, China, Croatia, Germany, Japan, Romania, and the United States) completed a questionnaire about media habits. Four major findings were derived from the study. First, violent media use was significantly related to aggressive behaviors. Second, the effects of media violence were similar in weight across the countries. Third, the relationship between aggressive behavior and media violence exposure depends on one's pre-existing aggressive cognitions and holds across different cultures. Fourth, media violence is a risk factor for aggression equal or similar in magnitude to other such risk factors, meaning it warrants comparable treatment and attention. Jeff Lewis' book Media Culture and Human Violence challenges the conventional approaches to media violence research. Lewis argues that violence is largely generated through the interaction of social processes and modes of thinking which are constantly refreshed through the media, politics and other cultural discourses. Violence is continually presented as 'authorized' or 'legitimate' within government, legal and narrative media texts. Accordingly, Lewis disputes the proposition that violence is 'natural' or that violence is caused by media of any sort. Rather, media interact with culturally generated and inherited modes of thinking, or 'consciousness', to create the conditions in which violence can occur. These forms of 'violence thinking' are embedded in historically rooted processes of hierarchical social organization. These hierarchical organizational systems shape our knowledge and beliefs, creating a ferment in which violence is normalized and authorized by governments and other powerful institutions.
The link between violence and the media is therefore very complex, but exists within the normative framework of modern culture.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/RTL/2] | [TOKENS: 532] |
Contents RTL/2 RTL/2 (Real-Time Language) is a discontinued high-level programming language for use in real-time computing, developed at Imperial Chemical Industries, Ltd. (ICI), by J.G.P. Barnes. It was originally used internally in ICI but was distributed by SPL International in 1974. It was based on concepts from ALGOL 68, and intended to be small and simple. RTL/2 was standardised in 1980 by the British Standards Institution. Language overview RTL/2 was strongly typed and supported separate compilation. Compilation units contained one or more items named bricks: A procedure brick was a procedure, which might or might not return a (scalar) value, have (scalar) parameters, or have local (scalar) variables. The entry mechanism and implementation of local variables were reentrant. Non-scalar data could only be accessed via reference (so-called REF variables were considered scalar). A data brick was a named static collection of scalars, arrays and records. There was no heap or garbage collection, so programmers had to implement memory management manually. A stack brick was an area of storage reserved for running all the procedures of a single process; it contained the call stack, local variables and other housekeeping items. The extent to which stack bricks were used varied depending on the host environment in which RTL/2 programs ran. Access to the host environment of an RTL/2 program was provided via special procedure and data bricks called SVC procedures and SVC data. These were accessible in RTL/2 but implemented in some other language in the host environment. Embedded assembly RTL/2 compiles to assembly language and provides the CODE statement to allow including assembly language in RTL/2 source code. This is only available when compiled with a systems programming option (CN:F). The CODE statement takes two operands: the number of bytes used by the code insert and the number of bytes of stack used. Within code statements, two trip characters are used to access RTL/2 variables; these vary between different operating systems. On a Digital Equipment Corporation (DEC) PDP-11 running RSX-11M, and a VAX running VMS, the trip characters are * and /. While the specifics varied by operating system, one VAX/VMS code insert, for example, moved the value of a variable passed into the RTL/2 procedure into a variable named COUNTER in a data brick named MYDATA. Hello World
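A minimal "Hello World" in RTL/2 might look like the following sketch. It assumes the standard stream-output procedure TWRT and the #...# convention for embedding control characters such as NL (newline, ASCII 10) in string literals; treat it as illustrative rather than verified source:

    TITLE HELLO WORLD;

    LET NL = 10;

    EXT PROC (REF ARRAY BYTE) TWRT;

    ENT PROC RRJOB ();
       TWRT("HELLO WORLD#NL#");
    ENDPROC;

Here RRJOB is the entry procedure brick run by the host environment, and TWRT is declared as an external procedure taking a reference to a byte array (the string to print).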
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/C89_(C_version)] | [TOKENS: 1170] |
Contents ANSI C ANSI C, ISO C, and Standard C are successive standards for the C programming language published by the American National Standards Institute (ANSI) and ISO/IEC JTC 1/SC 22/WG 14 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Historically, the names referred specifically to the original and best-supported version of the standard (known as C89 or C90). Software developers writing in C are encouraged to conform to the standards, as doing so helps portability between compilers. History and outlook The first standard for C was published by ANSI. Although this document was subsequently adopted by ISO/IEC, and subsequent revisions published by ISO/IEC have been adopted by ANSI, "ANSI C" is still used to refer to the standard. While some software developers use the term ISO C, others are standards-body neutral and use Standard C. The language was informally specified in 1978 by Brian Kernighan and Dennis Ritchie's book The C Programming Language. In 1983, the American National Standards Institute formed a committee, X3J11, to establish a standard specification of C. In 1985, the first Standard Draft was released, sometimes referred to as C85. In 1986, another Draft Standard was released, sometimes referred to as C86. The prerelease Standard C was published in 1988, and sometimes referred to as C88. The ANSI standard was completed in 1989 and ratified as ANSI X3.159-1989 "Programming Language C." This version of the language is often referred to as "ANSI C"; the label "C89" was later sometimes used to distinguish it from C90, following the same year-based labeling method. The same standard as C89 was ratified by ISO/IEC as ISO/IEC 9899:1990, with only formatting changes, which is sometimes referred to as C90. Therefore, the terms "C89" and "C90" refer to a language that is virtually identical. This standard has been withdrawn by both ANSI/INCITS and ISO/IEC. In 1995, the ISO/IEC published an extension, called Amendment 1, for the C standard. Its full name was ISO/IEC 9899:1990/AMD1:1995, nicknamed C94 or C95. Aside from error correction, it made further changes to the language capabilities, such as digraphs and improved multibyte and wide-character support (the wchar.h and wctype.h headers). This was both the first standard with a __STDC_VERSION__ value (199409L) and the first version in which the year in that value did not match the year of publication (1995), leading to common names of both C94 and C95. This would happen again in C17 (2018) and C23 (2024), but they are more commonly known by their earlier years, while this standard is often referred to by its later year. In addition to the amendment, two technical corrigenda were published by ISO for C90. In March 2000, ANSI adopted the ISO/IEC 9899:1999 standard. This standard is commonly referred to as C99. Some notable additions to the previous standard include inline functions, variable-length arrays, flexible array members, new types such as long long int and _Bool, one-line // comments, designated initializers, compound literals, and the mixing of declarations and code. Three technical corrigenda were published by ISO for C99. This standard has been withdrawn by both ANSI/INCITS and ISO/IEC in favour of C11. C11 was officially ratified and published on December 8, 2011. Notable features include improved Unicode support, type-generic expressions using the new _Generic keyword, a cross-platform multi-threading API (threads.h), and atomic types support in both core language and the library (stdatomic.h). One technical corrigendum has been published by ISO for C11. C17 was published in June 2018. Rather than introducing new language features, it only addresses defects in C11. C23 was published in October 2024, and is the current standard for the C programming language.
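As a brief illustration (an added sketch, not taken from the standard's text), several of the C99 additions listed above can appear together in one conforming program:

    /* C99 features in one place: designated initializers, long long,
       // comments, and a declaration inside the for-loop header. */
    #include <stdio.h>

    struct point { int x, y; };

    int main(void) {
        struct point p = { .x = 3, .y = 4 };   // designated initializers
        long long big = 1LL << 40;             // new long long type
        for (int i = 0; i < 2; i++)            // loop-scoped declaration
            printf("p=(%d,%d) big=%lld i=%d\n", p.x, p.y, big, i);
        return 0;
    }

None of this compiles under a strict C89 compiler, which is precisely why the version labels matter to portability.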
C2Y is an informal name for the next revision of the C programming language, which is hoped to be released in the later 2020s. As part of the standardization process, ISO/IEC also publishes technical reports and specifications related to the C language. More technical specifications are in development and pending approval, including the fifth and final part of TS 18661, a software transactional memory specification, and parallel library extensions. Support from major compilers ANSI C is supported by almost all the widely used compilers. GCC and Clang are two major C compilers popular today; both are based on C11 with updates including changes from later specifications such as C17. Any source code written only in standard C and without any hardware-dependent assumptions is virtually guaranteed to compile correctly on any platform with a conforming C implementation. Without such precautions, most programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to the reliance on compiler- or platform-specific attributes such as the exact size of certain data types and byte endianness. To mitigate the differences between K&R C and the ANSI C standard, the __STDC__ ("standard c") macro can be used to split code into ANSI and K&R sections, as in the sketch below. In such an example, a prototype is used in a function declaration for ANSI-compliant implementations, while an obsolescent non-prototype declaration is used otherwise. Those are still ANSI-compliant as of C99. Note how the code checks both definition and evaluation: this is because some implementations may set __STDC__ to zero to indicate non-ANSI compliance.
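A minimal sketch of this pattern, using the POSIX getopt function as an assumed illustration:

    #if defined(__STDC__) && __STDC__
    /* ANSI/ISO implementation: a full prototype is available. */
    extern int getopt(int, char * const *, const char *);
    #else
    /* Pre-ANSI (K&R) implementation: obsolescent non-prototype declaration. */
    extern int getopt();
    #endif

The #if line first tests that __STDC__ is defined at all (it is absent on K&R compilers) and then that it evaluates to a nonzero value, covering implementations that define it as zero.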
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Religious_Zionism] | [TOKENS: 4132] |
Contents Religious Zionism Religious Zionism (Hebrew: צִיּוֹנוּת דָּתִית, romanized: Tziyonut Datit) is a religious denomination that views Zionism as a fundamental component of Orthodox Judaism. Its adherents are also referred to as Dati Leumi (דָּתִי לְאֻמִּי, 'National Religious'), and in Israel, they are most commonly known by the plural form of the first part of that term: Datiim (דתיים, 'Religious'). The community is sometimes called 'Knitted kippah' (כִּפָּה סְרוּגָה, Kippah seruga), the typical head covering worn by male adherents to Religious Zionism. Before the establishment of the State of Israel, most Religious Zionists were observant Jews who supported Zionist efforts to build a Jewish state in the Land of Israel. Religious Zionism revolves around three pillars: the Land of Israel, the People of Israel, and the Torah of Israel. The Hardal (חרדי לאומי, Ḥaredi Le'umi, 'Nationalist Haredi') are a sub-community, stricter in its observance, and more statist in its politics. Those Religious Zionists who are less strict in their observance – although not necessarily more liberal in their politics – are informally referred to as "dati lite". History In 1862, German Orthodox Rabbi Zvi Hirsch Kalischer published his tractate Derishat Zion, positing that the salvation of the Jews, promised by the Prophets, can come about only by self-help. Rabbi Moshe Shmuel Glasner was another prominent rabbi who supported Zionism. The main ideologue of modern Religious Zionism was Rabbi Abraham Isaac Kook, who justified Zionism according to Jewish law, and urged young religious Jews to support efforts to settle the land, and the secular Labour Zionists to give more consideration to Judaism. Kook saw Zionism as a part of a divine scheme which would result in the resettlement of the Jewish people in its homeland. This would bring Geula ("salvation") to Jews, and then to the entire world. After world harmony is achieved by the re-foundation of the Jewish homeland, the Messiah will come. Although this has not yet happened, Kook emphasized that it would take time, and that the ultimate redemption happens in stages, often not apparent while happening. In 1924, when Kook became the Ashkenazi Chief Rabbi of Mandatory Palestine, he tried to reconcile Zionism with Orthodox Judaism. Ideology Religious Zionists believe that Eretz Israel (the Land of Israel) was promised to the ancient Israelites by God. Furthermore, modern Jews have the obligation to possess and defend the land in ways that comport with the Torah's high standards of justice. To generations of diaspora Jews, Jerusalem has been a symbol of the Holy Land and of their return to it, as promised by God in numerous Biblical prophecies. Despite this, many Jews did not embrace Zionism before the 1930s, and certain religious groups opposed it then, as some groups still do now, on the grounds that an attempt to re-establish Jewish rule in Israel by human agency was blasphemous. Hastening salvation and the coming of the Messiah was considered religiously forbidden, and Zionism was seen as a sign of disbelief in God's power, and therefore, a rebellion against God. Rabbi Kook developed a theological answer to that claim, which gave Zionism a religious legitimation: "Zionism was not merely a political movement by secular Jews. It was actually a tool of God to promote His divine scheme, and to initiate the return of the Jews to their homeland – the land He promised to Abraham, Isaac, and Jacob.
God wants the children of Israel to return to their home in order to establish a Jewish sovereign state in which Jews could live according to the laws of Torah and Halakha, and commit the Mitzvot of Eretz Israel (these are religious commandments which can be performed only in the Land of Israel). Moreover, to cultivate the Land of Israel was a Mitzvah by itself, and it should be carried out. Therefore, settling Israel is an obligation of the religious Jews, and helping Zionism is actually following God's will." Socialist Zionism envisaged the movement as a tool for building an advanced socialist society in the land of Israel, while solving the problem of antisemitism. The early kibbutz was a communal settlement that focused on national goals, unencumbered by religion and precepts of Jewish law such as kashrut. Socialist Zionists were one of the results of a long process of modernization within the Jewish communities of Europe, known as the Haskalah, or Jewish Enlightenment. Rabbi Kook's answer was as follows: Secular Zionists may think they do it for political, national, or socialist reasons, but in fact – the actual reason for them coming to resettle in Israel is a religious Jewish spark ("Nitzotz") in their soul, planted by God. Without their knowledge, they are contributing to the divine scheme and actually committing a great Mitzvah. The role of religious Zionists is to help them to establish a Jewish state and turn the religious spark in them into a great light. They should show them that the real source of Zionism and the longed-for Zion is Judaism and teach them Torah with love and kindness. In the end, they will understand that the laws of Torah are the key to true harmony and a socialist state (not in the Marxist meaning) that will be a light for the nations and bring salvation to the world. Shlomo Avineri explained the last part of Kook's answer: "... and the end of those pioneers, who scout into the blindness of secularism and atheism, but the treasured light inside them leads them into the path of salvation – their end is that from doing Mitzva without purpose, they will do Mitzva with a purpose." (page 2221) Ideological opposition to Zionism Some Haredi Jews view establishing Jewish sovereignty in the Holy Land before the coming of the Messiah as forbidden, as a violation of the Three Oaths. This would apply whether those who established this sovereignty were religious or secular. Another reason Haredi Jews opposed Zionism that had nothing to do with the establishment of a state or immigration to Palestine was the ideology of secular Zionism itself. Zionism's goal was first and foremost a transformation of the Jewish People from a religious society – whose sole shared characteristic was the Torah – into a political nationality, with a common land, language, and culture. Elchonon Wasserman said: The nationalist concept of the Jewish people as an ethnic or nationalistic entity has no place among us, and it's nothing but a foreign implant into Judaism; it is nothing but idolatry. And its younger sister, "religious nationalism (l'umis datis)", is idol worship that combines Hashem's name and heresy together (avodah zarah b'shituf). Chaim Brisker said, "The Zionists have already won because they got the Jews to look at themselves as a nation." Sholom Dovber Schneersohn, also known as the Rebbe Rashab, was the fifth Lubavitcher Rebbe. He opposed both secular and religious Zionism. In 1903, he published Kuntres Uma'ayan, which included a strong criticism against Zionism. 
He was concerned that nationalism would replace Judaism as the basis of Jewish identity. Rav Elyashiv also denounced the actions of religious Jews joining Zionist organizations as separating from authentic Judaism. In 2010, Rav Elyashiv published a letter criticizing the Shas Party for joining the World Zionist Organization (WZO). He wrote that the Party "is turning its back on the basics of Charedi Jewry of the past hundred years." He compared this move to the decision of the Mizrachi movement to join the WZO [over one hundred years ago], which was the deciding factor in their separation from authentic Torah Judaism. Organizations The first rabbis to support Zionism were Yehuda Shlomo Alkalai and Zvi Hirsch Kalischer. They argued that the change in the status of Western Europe's Jews following emancipation was the first step toward redemption (גאולה), and that, therefore, one must hasten the messianic salvation by a natural salvation – whose main pillars are the Kibbutz Galuyot ("Gathering of the Exiles"), the return to Eretz Israel, agricultural work (עבודת אדמה), and the revival of the everyday use of the Hebrew language. The Mizrachi organization was established in 1902 in Vilna at a world conference of Religious Zionists. It operates a youth movement, Bnei Akiva, which was founded in 1929. Mizrachi believes that the Torah should be at the centre of Zionism, a sentiment expressed in the Mizrachi Zionist slogan Am Yisrael B'Eretz Yisrael al pi Torat Yisrael ("The people of Israel in the land of Israel according to the Torah of Israel"). It also sees Jewish nationalism as a tool for achieving religious objectives. Mizrachi was the first official Religious Zionist party. It also built a network of religious schools that exist to this day. In 1937–1948, the Religious Kibbutz Movement established three settlement blocs of three kibbutzim each. The first was in the Beit Shean Valley, the second was in the Hebron mountains south of Bethlehem (known as Gush Etzion), and the third was in the western Negev. Kibbutz Yavne was founded in the center of the country as the core of a fourth bloc that came into being after the establishment of the state. Political parties The Labor Movement wing of Religious Zionism, founded in 1921 under the Zionist slogan "Torah va'Avodah" (Torah and Labor), was called HaPoel HaMizrachi. It represented religiously traditional Labour Zionists, both in Europe and in the Land of Israel, where it represented religious Jews in the Histadrut. In 1956, Mizrachi, HaPoel HaMizrachi, and other religious Zionists formed the National Religious Party (NRP) to advance the rights of religious Zionist Jews in Israel. The NRP operated as an independent political party until the 2003 elections. In the 2006 elections, the NRP merged with the National Union (HaIchud HaLeumi). In the 2009 elections, the Jewish Home (HaBayit HaYehudi) was formed in place of the NRP. Other parties and groups affiliated with religious Zionism are Gush Emunim, Tkuma, and Meimad. Kahanism, a radical branch of religious Zionism, was founded by Rabbi Meir Kahane, whose party, Kach, was eventually banned from the Knesset. Today, Otzma Yehudit and National Religious Party–Religious Zionism are the leading Dati Leumi parties. Educational institutions The flagship religious institution of the Religious Zionist movement is the yeshiva founded by Rabbi Abraham Isaac Kook in 1924, called in his honor "Mercaz haRav" (lit. 'the Rabbi's center').
Other Religious Zionist yeshivot include Ateret Cohanim, Beit El yeshiva, and Yeshivat Or Etzion, founded by Rabbi Haim Druckman, a foremost disciple of Rabbi Tzvi Yehuda Kook. Machon Meir is specifically outreach-focused. There are approximately 90 Hesder yeshivot, allowing students to continue their Torah study during their National Service (see below). The first of these was Yeshivat Kerem B'Yavneh, established in 1954; the largest is the Hesder Yeshiva of Sderot, with over 800 students. Others which are well known include Yeshivat Har Etzion, Yeshivat HaKotel, Yeshivat Birkat Moshe in Maale Adumim, Yeshivat Har Bracha, Yeshivat Sha'alvim, and Yeshivat Har Hamor. These institutions usually offer a kollel for Semikha, or Rabbinic ordination. Students generally prepare for the Semikha test of the Chief Rabbinate of Israel (the "Rabbanut"); until his passing in 2020, many also prepared for that of the posek R. Zalman Nechemia Goldberg. Training as a Dayan (rabbinic judge) in this community is usually through Machon Ariel (Machon Harry Fischel), also founded by Rav Kook, or Kollel Eretz Hemda; training through the Chief Rabbinate is also common. The Meretz Kollel has trained hundreds of community Rabbis. Women study in institutions which are known as Midrashot (sing.: Midrasha) – prominent examples are Midreshet Ein HaNetziv and Migdal Oz. These are usually attended for one year either before or after sherut leumi. Various midrashot offer parallel degree coursework, and they may then be known as a machon. The Midrashot focus on Tanakh (Hebrew Bible) and Machshavah (Jewish thought); some offer specialized training in Halakha: Nishmat certifies women as Yoatzot Halacha, Midreshet Lindenbaum as to'anot; Lindenbaum, Matan, and Ein HaNetziv offer Talmud-intensive programs in rabbinic-level halakha. Community education programs are offered by Emunah and Matan across the country. For degree studies, many attend Bar Ilan University, which allows students to combine Torah study with university study, especially through its Machon HaGavoah LeTorah; the Jerusalem College of Technology offers a similar combination, as well as a Haredi track. There are also several colleges of education which are associated with the Hesder and the Midrashot, such as Herzog College, Talpiot, and the Lifshitz College of Education. These colleges often offer (master's level) specializations in Tanakh and Machshava. High school students study at Mamlachti Dati (religious state) schools, often associated with Bnei Akiva. These schools offer intensive Torah study alongside the matriculation syllabus, and emphasize tradition and observance; see Education in Israel § Educational tracks. The first of these schools was established at Kfar Haroeh by Moshe-Zvi Neria in 1939; "Yashlatz", associated with Mercaz HaRav, was founded in 1964, and predates several schools similarly linked to Hesder yeshivot, such as that at Sha'alvim; see also the school networks AMIT and Tachkemoni. Today, there are 60 such institutions, with more than 20,000 students. A Dati Leumi girls' high school is referred to as an "Ulpana"; a boys' high school is a "Yeshiva Tichonit". Some institutions are aligned with the Hardal community, with an ideology that is somewhat more "statist". The leading Yeshiva here is Har Hamor; several high schools also operate. Politics Most Religious Zionists embrace right-wing politics, especially the religious right-wing Jewish Home party and more recently the Religious Zionist Party, but many also support the mainstream right-wing Likud.
There are also some left-wing Religious Zionists, such as Rabbi Michael Melchior, whose views were represented by the Meimad party (which ran together with the Israeli Labor party). Many Israeli settlers in the West Bank are Religious Zionists, along with most of the settlers forcibly expelled from the Gaza Strip in August and September 2005. Military service Generally, all adult Jewish males and females in Israel are obligated to serve in the IDF. Certain segments of Orthodoxy defer their service in order to engage in full-time Torah study for the purpose of spiritual development; Religious Zionist belief advocates that both Torah study and military service are critical to Jewish survival and prosperity. For this reason, many Religious Zionist men take part in the Hesder program, a concept conceived by Rabbi Yehuda Amital which allows military service to be combined with yeshiva studies. Some others attend a pre-army Mechina educational program, delaying their service by one year. 88% of Hesder students belong to combat units, compared to a national average of below 30%. Students at Mercaz HaRav, and some Hardal yeshivot, undertake their service through a modified form of Hesder. While some Religious Zionist women serve in the army, most choose national service, known as Sherut Leumi, instead (working at hospitals, schools, and day-care centers). In November 2010, the IDF held a special conference which was attended by the heads of Religious Zionism, in order to encourage female Religious Zionists to join the IDF. The IDF undertook that all modesty and kosher issues would be handled, in order to make female Religious Zionists comfortable. Dress Religious Zionists are often called Kippot sruggot, or "sruggim", in reference to the knitted or crocheted kippot (skullcaps; sing. kippah) which are worn by the men (although some of the men wear other types of head coverings, such as black velvet kippot). Otherwise – particularly for the "dati lite" – their style of dress is largely the same as secular Israelis, with jeans less common; on Shabbat, they wear a stereotypically white dress shirt (recently a polo shirt in some sectors), and often a white kippah. Women usually wear (long) skirts, and often cover their hair, usually with a hair accessory, as opposed to a sheitel (wig) in the Haredi style. In the Hardal community, the dress is generally more formal, with an emphasis on appearing neat. The kippot, which are also knitted, are significantly larger, and it is common for tzitzit to be visibly worn, in keeping with the Haredi practice; payot (sidelocks) are similarly common, as is an (untrimmed) beard. Women invariably cover their hair – usually with a snood, or a mitpachat (Hebrew for "kerchief") – and often wear sandals; their skirts are longer and looser fitting. On Shabbat, men often wear a (blue) suit – atypical in Israel outside the Haredi world – and a large white crocheted kippah. At prayer, the members of the community typically use the Koren Siddur or the Rinat Yisrael. Homes often have on their bookshelves a set of the Steinsaltz Talmud (much as the Artscroll is to be found in American Haredi homes), Mishnah with Kehati, Rambam La'Am, Peninei Halakha, and/or Tzurba M'Rabanan; as well as a selection of the numerous popular books by leading Dati Leumi figures on the weekly parsha, the festivals, and hashkafa (discussions on Jewish thought). Similar to Haredi families, more religious homes will also have all of "The Traditional Jewish Bookshelf".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Fail-safe] | [TOKENS: 701] |
Contents Fail-safe In engineering, a fail-safe is a design feature or practice that, in the event of a failure, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment or to people. Unlike inherent safety to a particular hazard, a system being "fail-safe" does not mean that failure is naturally inconsequential, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. If and when a "fail-safe" system fails, it remains at least as safe as it was before the failure. Since many types of failure are possible, failure mode and effects analysis is used to examine failure situations and recommend safety design and procedures. Some systems can never be made fail-safe, as continuous availability is needed. Redundancy, fault tolerance, or contingency plans are used for these situations (e.g. multiple independently controlled and fuel-fed engines). Examples As well as physical devices and systems, fail-safe procedures can be created so that if a procedure is not carried out, or is carried out incorrectly, no dangerous action results. Other terminology Fail-safe (foolproof) devices are also known as poka-yoke devices. Poka-yoke, a Japanese term, was coined by Shigeo Shingo, a quality expert. "Safe to fail" refers to civil engineering designs such as the Room for the River project in the Netherlands and the Thames Estuary 2100 Plan, which incorporate flexible adaptation strategies or climate change adaptation which provide for, and limit, damage should severe events such as 500-year floods occur. Fail-safe and fail-secure are distinct concepts. Fail-safe means that a device will not endanger lives or property when it fails. Fail-secure, also called fail-closed, means that access or data will not fall into the wrong hands in a security failure. Sometimes the approaches suggest opposite solutions. For example, if a building catches fire, fail-safe systems would unlock doors to ensure quick escape and allow firefighters inside, while fail-secure would lock doors to prevent unauthorized access to the building. The opposite of fail-closed is called fail-open. Fail active operational can be installed on systems that have a high degree of redundancy, so that a single failure of any part of the system can be tolerated (fail active operational) and a second failure can be detected – at which point the system will turn itself off (uncouple, fail passive). One way of accomplishing this is to have three identical systems installed, and a control logic which detects discrepancies; a sketch of such 2-out-of-3 voting appears at the end of this article. Examples of this arrangement include many aircraft systems, among them inertial navigation systems and pitot tubes. During the Cold War, "failsafe point" was the term used for the point of no return for American Strategic Air Command nuclear bombers, just outside Soviet airspace. In the event of receiving an attack order, the bombers were required to linger at the failsafe point and wait for a second confirming order; until one was received, they would not arm their bombs or proceed further. The design was to prevent any single failure of the American command system causing nuclear war. This sense of the term entered the American popular lexicon with the publishing of the 1962 novel Fail-Safe.
(Other nuclear war command control systems have used the opposite scheme, fail-deadly, in which a retaliatory strike is launched unless continuous or regular proof arrives that an enemy first-strike attack has not occurred.)
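To make the triple-redundancy ("fail active operational") arrangement described above concrete, here is a minimal C sketch of 2-out-of-3 voting logic. The channel values, names and tolerance are hypothetical illustrations, not any real avionics interface: a single faulty channel is outvoted, and when no two channels agree the system fails passive by flagging its output as invalid.

    #include <stdio.h>

    #define CHANNELS  3
    #define TOLERANCE 0.5   /* maximum disagreement accepted between channels */

    /* Returns the voted value; *ok is set to 0 when no two channels agree,
       i.e. a second failure has been detected and the output must be
       treated as invalid (fail passive). */
    double vote(const double ch[CHANNELS], int *ok) {
        for (int i = 0; i < CHANNELS; i++) {
            for (int j = i + 1; j < CHANNELS; j++) {
                double diff = ch[i] > ch[j] ? ch[i] - ch[j] : ch[j] - ch[i];
                if (diff <= TOLERANCE) {          /* two channels agree...   */
                    *ok = 1;
                    return (ch[i] + ch[j]) / 2.0; /* ...so use their average */
                }
            }
        }
        *ok = 0;    /* no agreeing pair: shut the function down */
        return 0.0;
    }

    int main(void) {
        double airspeed[CHANNELS] = { 250.1, 249.9, 180.0 }; /* one bad channel */
        int ok;
        double v = vote(airspeed, &ok);
        if (ok)
            printf("voted value: %.1f (faulty channel outvoted)\n", v);
        else
            printf("channels disagree: failing passive, output disabled\n");
        return 0;
    }

The design choice worth noting is that the voter never tries to decide which channel is "right" in the absolute sense; it only masks a single outlier and otherwise refuses to produce output, which is what distinguishes fail-passive behavior from carrying on with bad data.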
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-Wanninger_2024-116] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonized land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 metres.
Several animal groups are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. Estimated numbers of described extant species have been tabulated for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine) and free-living or parasitic ways of life. Such estimates are based on numbers described scientifically; much larger estimates have been calculated by various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species, including those not yet described, was calculated in 2011 to be about 7.77 million. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is produced only by sponges and pelagophyte algae. Molecular clock estimates for the origin of 24-ipc production in the two groups point to sponges as the likelier source: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia established their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from about 1 Gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 Gya rocks in North America, in 1.5 Gya rocks in Australia and North America, and in 1.7 Gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, placing them in an external phylogeny alongside lineages such as the Holomycota (including fungi), Ichthyosporea, Pluriformea, and Filasterea, with uncertain relationships indicated by dashed lines in their cladogram. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than the Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, the Placozoa lack symmetry, and they were often considered a "missing link" between protists and multicellular animals; the presence of Hox genes in Placozoa suggests, however, that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and the ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both; the sponge-sister cladogram they supported places Porifera as the earliest branch, followed by Ctenophora, then Placozoa, Cnidaria and Bilateria (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to construct a ctenophore-sister phylogeny, with Ctenophora branching first, followed by Porifera, then Placozoa, Cnidaria and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and is under active research.
The remaining animals, the great majority, comprising some 29 phyla and over a million species, form the clade Bilateria, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria divides them into the Xenacoelomorpha, the deuterostome groups Ambulacraria and Chordata, and the protostome groups Ecdysozoa and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to the Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. The Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting; among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul), down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches', with different body plans, roughly corresponding to phyla): vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidarians and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, the sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food, and a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects (principally bees and silkworms), and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-:7-125] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperatures range from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon; its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's; and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread in the low Martian gravity even by the weak winds of the tenuous atmosphere. The terrain of Mars roughly follows a north–south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains and the southern hemisphere of cratered highlands. Geologically the planet is fairly active, with marsquakes trembling beneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and by flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on the planet's geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, the Sun or the Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, and much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), roughly four times the size of the Moon's South Pole–Aitken basin; if confirmed, it would be the largest impact basin yet discovered. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system, 3.5 to 4 billion years ago. This ring system may have been formed from a moon twenty times more massive than Phobos that orbited Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but three primary ones stand out: the Noachian (4.5 to 3.5 billion years ago), the Hesperian (3.5 to roughly 3.3–2.9 billion years ago), and the Amazonian (continuing to the present). Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts about 4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed through these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris, preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on the steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. They can start in a tiny area, then spread out for hundreds of metres, and have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts (about 22 millirads of absorbed dose) per day, significantly less than the 1.84 millisieverts per day experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars; smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum, while the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its prime meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs to define 0.0° longitude, coinciding with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea-level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, around 1,800 kilometres (1,100 mi) in diameter, and Isidis, around 1,500 kilometres (930 mi) in diameter. Owing to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. However, Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and it extends across one-fifth of the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, possibly making Mars a planet with a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from the micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth; for example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's.
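These pressure and elevation figures can be related through the standard isothermal barometric formula, p(h) = p0 · exp(−h/H). The short Python sketch below uses only the ~600 Pa mean pressure and ~10.8 km scale height quoted here; the assumption of a constant-temperature atmosphere is a rough approximation, so this is an illustrative check rather than how the quoted values were derived. It reproduces the Hellas Planitia figure well, while the Olympus Mons figure deviates, showing the model's limits.

    import math

    P0 = 600.0     # mean surface-level pressure in Pa (quoted above)
    H = 10.8e3     # approximate atmospheric scale height in m (quoted above)

    def pressure(height_m):
        """Isothermal barometric estimate of pressure at a given elevation."""
        return P0 * math.exp(-height_m / H)

    print(round(pressure(-7.0e3)))   # Hellas floor, roughly 7 km deep: ~1147 Pa,
                                     # close to the quoted 1,155 Pa
    print(round(pressure(21.9e3)))   # Olympus Mons summit, 21.9 km: ~79 Pa; the
                                     # quoted ~30 Pa low reflects temperature and
                                     # weather effects this model cannot capture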
The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface; it may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions, as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity: the planet approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. Summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. The seasons also deposit dry ice (frozen CO2) on the polar ice caps.
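The 43% sunlight figure quoted above follows directly from the inverse-square law; a one-line check in Python:

    # Relative solar flux at Mars's mean distance of 1.52 au (inverse-square law)
    print(1 / 1.52**2)   # 0.4328..., i.e. about 43% of the sunlight Earth receives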
Hydrology Mars contains a substantial amount of water, but most of it is dust-covered water ice in the Martian polar ice caps. The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth; only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with snow of carbon dioxide (dry ice). Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much longer than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering have been observed, nor any superimposed impact craters, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, present as hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. The observations supported earlier hypotheses, based on the timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may instead be dry, granular flows, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect that much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region; the volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
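The "five to seven times" enrichment quoted above follows directly from the two D/H ratios once the stated uncertainty is propagated; a minimal check in Python, using only the values given in the text:

    # D/H ratios as quoted in the text (dimensionless isotope ratios)
    dh_mars, dh_err = 9.3e-4, 1.7e-4   # modern Martian atmosphere
    dh_earth = 1.56e-4                 # terrestrial reference value

    print((dh_mars - dh_err) / dh_earth)   # ~4.9
    print((dh_mars + dh_err) / dh_earth)   # ~7.1, bracketing "five to seven times"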
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest for any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Solar System, forming a straight line crossing the Sun. The average time between successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet's centre.
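Several of the figures above can be cross-checked with two applications of the same elementary formulas, as sketched below in Python. The synodic period follows from the two sidereal orbital periods, and Phobos's orbital period follows from Kepler's third law applied to the orbital radius just quoted; the standard gravitational parameter of Mars (GM ≈ 4.2828 × 10¹³ m³/s²) is an assumed input not given in the text.

    import math

    # Synodic period of Mars from the two sidereal orbital periods (days)
    T_earth, T_mars = 365.256, 686.98
    print(1 / (1/T_earth - 1/T_mars))    # ~779.9 days, matching the 779.94 quoted

    # Phobos's orbital period from Kepler's third law: T = 2*pi*sqrt(a^3 / GM)
    GM_mars = 4.2828e13                  # m^3/s^2, assumed standard value
    a = 9.376e6                          # Phobos's orbital radius in metres (quoted above)
    T_phobos = 2 * math.pi * math.sqrt(a**3 / GM_mars) / 3600
    print(T_phobos)                      # ~7.66 hours

    # The same synodic formula, combined with the ~24.6 h Martian day quoted
    # earlier, gives the interval between successive risings of Phobos seen
    # from the surface, the "just 11 hours" mentioned below:
    print(1 / (1/T_phobos - 1/24.6))     # ~11.1 hours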
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman counterpart of Ares, and in modern Greek the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from those of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory, and the unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility is the involvement of a third body, or some form of impact disruption. More recent lines of evidence that Phobos has a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, itself formed from debris from a large impact on Mars, which was then destroyed by a more recent impact. More recently, a multinational team of researchers suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past; analysis of rocks recording tidal processes on the planet indicates that those tides may have been regulated by such a moon. Human observations and exploration The history of observations of Mars is marked by the oppositions of Mars, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is then close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but during later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers, and by 1534 BCE they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, 'the fiery one'); more commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609 Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610 the Italian astronomer Galileo Galilei made the first use of a telescope for astronomical observation, including of Mars. The diurnal parallax of Mars was again measured with the telescope in an effort to determine the Sun–Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes had reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred, and the Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali – supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth – which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. Lowell published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Eugène Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen.

The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, intended to fly past the planet in 1963, but contact with it was lost en route. NASA's Mariner 4 followed and became the first spacecraft to transmit successfully from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet taken from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned, and after the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two lost before reaching the planet (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in Mars orbit before reaching its destination, the moon Phobos.

In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field, and surface minerals. Starting with these missions, a range of new, improved uncrewed spacecraft, including orbiters, landers, and rovers, has been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, among them 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online, at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from such missions had reached over seven tons, most of it consisting of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars; key areas include establishing telecommunications, payload delivery, and surface imaging.

Habitability and habitation

During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. Even so, articles on Martian biology were still being published into the 1960s, setting aside explanations other than life for the seasonal changes on Mars.

The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but the planet's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability, though recent evidence suggests that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind owing to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is also nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and the interior of the planet.

Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
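To put rough numbers on the habitable-zone argument above, one can compare radiative equilibrium temperatures, T_eq = [S(1 − A)/(4σ)]^(1/4), where S is the stellar flux at the planet and A its Bond albedo. A sketch under stated assumptions (standard solar constant, orbital distances, and albedos; greenhouse warming ignored):

```python
# Rough radiative equilibrium temperatures for Earth and Mars:
# T_eq = (S * (1 - A) / (4 * sigma)) ** 0.25.
# Input values are standard figures assumed for illustration;
# greenhouse warming is deliberately ignored.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0      # solar constant at 1 au, W m^-2

def t_eq(dist_au: float, albedo: float) -> float:
    """Equilibrium temperature (K) at a given distance and Bond albedo."""
    flux = S0 / dist_au ** 2
    return (flux * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(f"Earth: {t_eq(1.000, 0.29):.0f} K")  # ~255 K; greenhouse adds ~33 K
print(f"Mars:  {t_eq(1.524, 0.25):.0f} K")  # ~210 K, below water's freezing
                                            # point even before the thin
                                            # atmosphere is considered
```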
A 2014 analysis of the Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed as possible evidence for life, since these chemical compounds would quickly break down in the Martian atmosphere; alternatively, they may be replenished by volcanic or other geological means, such as serpentinization. Impact glass, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars and could likewise have preserved signs of life there, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, the find permits no definitive determination of a biological or abiotic origin with the data currently available.

Several plans for a human mission to Mars have been proposed, but none has yet come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared within the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and by in situ resource utilization on Mars, until the colony reaches full self-sufficiency. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months (one Earth–Mars synodic period). The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars.

In culture

Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears."

The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation: a comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_identity_approach] | [TOKENS: 1505] |
Contents Social identity approach

"Social identity approach" is an umbrella term for two methods used by academics to describe certain complex social phenomena – namely, the dynamics between groups and individuals. Those two theoretical methods are called social identity theory and self-categorization theory. Experts describe them as two intertwined, but distinct, social psychological theories. The term "social identity approach" arose as an attempt to militate against the tendency to conflate the two theories, as well as the tendency to mistakenly believe one theory to be a component of the other. The theories should be thought of as overlapping: while there are similarities, self-categorization theory has greater explanatory scope (i.e., it is less focused on intergroup relationships specifically) and has been investigated in a broader range of empirical conditions. Self-categorization theory can also be thought of as having been developed to address limitations of social identity theory, specifically the limited manner in which social identity theory deals with the cognitive processes that underpin the behaviour it describes. Although the umbrella term may be useful when contrasting broad social psychological movements, when applying either theory it is considered beneficial to distinguish carefully between the two so that their specific characteristics can be retained. The social identity approach has been applied to a wide variety of fields and continues to be very influential: there is a high citation rate for key social identity papers, and that rate continues to increase.

Implications

The social identity approach has been contrasted with the social cohesion approach when it comes to defining social groups. The social identity approach describes the state of people thinking of themselves and others as a group, from which three intra-psychological processes follow. First, social categorization (see self-categorization theory) means that people organize social information by categorizing people into groups. Second, social comparison (see social comparison theory) means that people give meaning to those categories in order to understand the task of the group in the specific situation. Third, social identification is the process by which people relate the self to one of those categories.

Regarding the relation between collective identification and work motivation, several propositions have been made concerning situational influences, acceptance of the leader, and the self-definition of a collective. On situational influences, research indicates that individuals are activated by situations that challenge their inclusion in the group. On acceptance of the leader, the so-called ingroup-favoring bias (see in-group favoritism) means that if the team leader is interpreted as an ingroup member, the other team members will attribute his or her good behavior internally while attributing bad behavior externally. For the self-definition of a collective, the value of the group as well as the belief in current and future success are important. Closely linked to self-definition, cohesion is another construct that has an impact on the development of group motivation and, in a broader sense, on group performance.

On the topic of social groups, some social psychologists draw a distinction between different types of group phenomena.
Specifically, "those that derive from interpersonal relationships and interdependence with specific others and those that derive from membership in larger, more impersonal collectives or social categories". The social identity approach, however, does not anticipate this distinction: instead it anticipates that the same psychological processes underlie both intergroup and intragroup phenomena, involving both small and large groups. Relatedly, the persistent perception that the social identity approach is relevant only to large-group phenomena has led some social identity theorists to reassert (both theoretically and empirically) its relevance to small-group interactions.

Applications

According to the social identity approach, leadership is a function of the group rather than of the individual. Individuals who are leaders in their groups tend to be closer to the prototypical group member than are followers. Additionally, they tend to be more socially attractive, which makes it easier for group members to accept their authority and comply with their decisions. Finally, leaders tend to be viewed by others as the leader: group members attribute leadership traits to the person rather than to the situation, furthering the distinction between the leader and the rest of the group by viewing him or her as special. Consistent with this view of leadership, researchers have found that individuals can manipulate their own leadership status in groups by portraying themselves as prototypical of the group.

Social identity concepts have been applied to economics, resulting in what is now known as identity economics. For example, two separate papers and a book by Akerlof and Kranton incorporate social identity as a factor in the principal–agent model. The main conclusion is that when agents consider themselves insiders, they will maximize their identity utility by exerting greater effort than the prescribed behavior requires; when they consider themselves outsiders, they will require a higher wage to compensate for the difference between their behavior and the prescribed behaviors.

Related theoretical work

The social identity model of deindividuation effects (SIDE) was developed from further research on social identity theory and self-categorization theory, further specifying the effects of situational factors on the functioning of the processes proposed by the two theories. The SIDE model uses this framework to explain the cognitive effects of visibility and anonymity in intra-group and inter-group contexts. The model is based on the idea that the self-concept is flexible and differs across situations and contexts. The theory posits a range of different self-categories that define people as unique individuals or in terms of their membership in specific social groups and other, broader social categories, depending on the context of the situation. The SIDE model proposes that anonymity shifts both the focus of self-awareness from the individual self to the group self and the perception of others from being mostly interpersonal to being group-based (stereotyping). Research has suggested that visual anonymity not only increases negative behavior towards others but can also promote positive social relations. In one study, all volunteers participated individually in group discussions on three different topics.
In the visually anonymous condition, all communication between participants was text-based, while in the visually identifiable condition the communication was also supplemented by two-way video cameras. The study found that anonymity significantly increased group attraction.

Intergroup emotion theory further expands on the concept of personally significant group memberships posed by social identity and self-categorization theories. It is primarily based on the concept of depersonalization and the interchangeability of the self with other ingroup members. This causes cognitive representations of the self and the group to become inevitably connected, and the group therefore acquires emotional significance: individuals not only categorize themselves as members of the ingroup but also "react emotionally when situations or events affect the ingroup". For example, people often report that their group is being discriminated against, even though they feel that they personally are not subject to that discrimination.

Controversies

Some researchers have claimed that the majority of results in research using the minimal group paradigm can be derived from self-interest and interdependence, and that this poses a serious problem for social identity theory and self-categorization theory, in particular for self-categorization theory's account of social groups. Social identity researchers have responded by suggesting that the interdependence-centric analysis proposed as an alternative is inconsistent and still relies heavily on the social categorization processes detailed in self-categorization theory. Moreover, they argue that researchers making these criticisms have significantly misinterpreted the role of sociological categories in the two theories. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/National_Security_Agency] | [TOKENS: 16850] |
Contents National Security Agency

The National Security Agency (NSA) is an intelligence agency of the United States Department of Defense, under the authority of the director of national intelligence (DNI). The NSA is responsible for global monitoring, collection, and processing of information and data for global intelligence and counterintelligence purposes, specializing in a discipline known as signals intelligence (SIGINT). The NSA is also tasked with the protection of U.S. communications networks and information systems. The NSA relies on a variety of measures to accomplish its mission, the majority of which are clandestine. The NSA has roughly 32,000 employees.

Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Between then and the end of the Cold War, it became the largest of the U.S. intelligence organizations in terms of personnel and budget, though information available as of 2013 indicates that the Central Intelligence Agency (CIA) has since pulled ahead in this regard, with a budget of $14.7 billion. The NSA currently conducts worldwide mass data collection and has been known to physically bug electronic systems as one method to this end. The NSA is also alleged to have been behind such attack software as Stuxnet, which severely damaged Iran's nuclear program. The NSA, alongside the CIA, maintains a physical presence in many countries across the globe; the CIA/NSA joint Special Collection Service (a highly classified intelligence team) inserts eavesdropping devices in high-value targets (such as presidential palaces or embassies). SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, [and] breaking".

Unlike the CIA and the Defense Intelligence Agency (DIA), both of which specialize primarily in foreign human espionage, the NSA does not publicly conduct human intelligence gathering. The NSA is entrusted with assisting with and coordinating SIGINT elements for other government organizations, which are prevented by executive order from engaging in such activities on their own. As part of these responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which facilitates cooperation between the NSA and other U.S. defense cryptanalysis components. To further ensure streamlined communication between the signals intelligence community divisions, the NSA director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.

The NSA's actions have been a matter of political controversy on several occasions, including its role in providing intelligence during the Gulf of Tonkin incident, which contributed to the escalation of U.S. involvement in the Vietnam War. Declassified documents later revealed that the NSA misinterpreted or overstated signals intelligence, leading to reports of a second North Vietnamese attack that likely never occurred. The agency has also received scrutiny for spying on anti-Vietnam War leaders and for its participation in economic espionage. In 2013, the NSA had many of its secret surveillance programs revealed to the public by Edward Snowden, a former NSA contractor. According to the leaked documents, the NSA intercepts and stores the communications of over a billion people worldwide, including United States citizens.
The documents also revealed that the NSA tracks the movements of hundreds of millions of people using cell phone metadata. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".

History

The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section, also known as the Cipher Bureau. It was headquartered in Washington, D.C., and was part of the war effort under the executive branch without direct congressional authorization. During the war, it was moved several times within the army's organizational chart. On July 5, 1917, Herbert O. Yardley was assigned to head the unit; at that point, the unit consisted of Yardley and two civilian clerks. It absorbed the Navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and the army cryptographic section of Military Intelligence (MI-8) moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.

After the disbandment of MI-8, the U.S. government created its successor, the Cipher Bureau, also known as the Black Chamber, in 1919. The Black Chamber was the United States' first peacetime cryptanalytic organization. Jointly funded by the Army and the State Department, the Cipher Bureau was disguised as a New York City commercial code company; it produced and sold such codes for business use. Its true mission, however, was to break the communications (chiefly diplomatic) of other nations. At the Washington Naval Conference, it aided American negotiators by providing them with the decrypted traffic of many of the conference delegations, including the Japanese. The Black Chamber successfully persuaded Western Union, the largest U.S. telegram company at the time, as well as several other communications companies, to illegally give it access to the cable traffic of foreign embassies and consulates, though these companies soon publicly discontinued their collaboration. Despite the Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating, "Gentlemen do not read each other's mail."

During World War II, the Signal Intelligence Service (SIS) was created to intercept and decipher the communications of the Axis powers. When the war ended, the SIS was reorganized as the Army Security Agency (ASA) and placed under the leadership of the Director of Military Intelligence. On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA), originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff. The AFSA was tasked with directing Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units. However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests, such as the Department of State, the Central Intelligence Agency (CIA), and the Federal Bureau of Investigation (FBI). In December 1951, President Harry S. Truman ordered a panel to investigate how the AFSA had failed to achieve its goals.
The results of the investigation led to improvements and the agency's redesignation as the National Security Agency. The National Security Council issued a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9; on the same day, Truman issued a second memorandum calling for the establishment of the NSA. The actual establishment was effected by a November 4 memo from Robert A. Lovett, the Secretary of Defense, changing the name of the AFSA to the NSA and making the new agency responsible for all communications intelligence. Since President Truman's memo was a classified document, the existence of the NSA was not known to the public at that time; due to its ultra-secrecy, the U.S. intelligence community referred to the NSA as "No Such Agency".

In the 1960s, the NSA played a key role in expanding the American commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer USS Maddox during the Gulf of Tonkin incident. A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as key leaders of the civil rights movement, including Martin Luther King Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War. The project proved controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal". The NSA also mounted a major effort to secure tactical communications among U.S. armed forces during the war, with mixed success. The NESTOR family of compatible secure voice systems it developed was widely deployed during the Vietnam War, with about 30,000 NESTOR sets produced; however, a variety of technical and operational problems limited their use, allowing the North Vietnamese to exploit and intercept U.S. communications (Vol. I, p. 79).

In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Senator Frank Church revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency, Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam War leaders such as Jane Fonda and Dr. Benjamin Spock. The NSA tracked these individuals in a secret filing system that was destroyed in 1974. Following the resignation of President Richard Nixon, there were several investigations into suspected misuse of FBI, CIA, and NSA facilities. Senator Frank Church uncovered previously unknown activity, such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro. The investigation also uncovered the NSA's wiretaps on targeted U.S. citizens. After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed, designed to limit the practice of mass surveillance in the United States.

In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya.
In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage, in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'. That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland. The memorial is a "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology"; NSA employees must have been retired for more than fifteen years to qualify for it.

The NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days, caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again (some incoming traffic was also directed instead to Britain's GCHQ for the time being). Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure.

In the 1990s the defensive arm of the NSA, the Information Assurance Directorate (IAD), started working more openly; the first public technical talk by an NSA scientist at a major cryptography conference was J. Solinas's presentation on efficient elliptic curve cryptography algorithms at Crypto 1997. The IAD's cooperative approach to academia and industry culminated in its support for a transparent process to replace the outdated Data Encryption Standard (DES) with the Advanced Encryption Standard (AES). Cybersecurity policy expert Susan Landau attributes the NSA's harmonious collaboration with industry and academia in the selection of the AES in 2000, and the agency's support for the choice of a strong encryption algorithm designed by Europeans rather than by Americans, to Brian Snow, who was the Technical Director of IAD and represented the NSA as cochairman of the Technical Working Group for the AES competition, and to Michael Jacobs, who headed IAD at the time (p. 75).

After the terrorist attacks of September 11, 2001, the NSA believed that it had public support for a dramatic expansion of its surveillance activities. According to Neal Koblitz and Alfred Menezes, the period when the NSA was a trusted partner with academia and industry in the development of cryptographic standards started to come to an end when, as part of the change in the NSA in the post-September 11 era, Snow was replaced as Technical Director, Jacobs retired, and the IAD could no longer effectively oppose proposed actions by the offensive arm of the NSA. In the aftermath of the attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cell phones. ThinThread contained advanced data-mining capabilities and a "privacy mechanism": surveillance data were stored encrypted, and decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was canceled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system. The Trailblazer Project ramped up in 2002 and was worked on by Science Applications International Corporation (SAIC), Boeing, Computer Sciences Corporation, IBM, and Litton Industries. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer.
This led to investigations by Congress and the NSA and DoD Inspectors General, and the project was canceled in early 2004. Turbulence started in 2005; it was developed in small, inexpensive "test" pieces rather than as one grand plan like Trailblazer, and it included offensive cyber-warfare capabilities such as injecting malware into remote computers. It was intended to realize information processing at higher speeds in cyberspace, but Congress criticized Turbulence in 2007 for having bureaucratic problems similar to Trailblazer's.

The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013, most of them leaked by former NSA contractor Edward Snowden. On 4 September 2020, the NSA's surveillance program was ruled unlawful by a U.S. Court of Appeals, which added that the U.S. intelligence leaders who had publicly defended it were not telling the truth.

Mission

The NSA's eavesdropping mission includes radio broadcasting, both from various organizations and individuals, the Internet, telephone calls, and other intercepted forms of communication. Its secure communications mission includes military, diplomatic, and all other sensitive, confidential, or secret government communications. According to a 2010 article in The Washington Post, "every day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases." Because of its listening task, NSA/CSS has been heavily involved in cryptanalytic research, continuing the work of predecessor agencies which had broken many World War II codes and ciphers (see, for instance, Purple, the Venona project, and JN-25).

In 2004, NSA Central Security Service and the National Cyber Security Division of the Department of Homeland Security (DHS) agreed to expand the NSA Centers of Academic Excellence in Information Assurance Education Program. As part of National Security Presidential Directive 54/Homeland Security Presidential Directive 23 (NSPD 54), signed on January 8, 2008, by President Bush, the NSA became the lead agency to monitor and protect all of the federal government's computer networks from cyber-terrorism. A part of the NSA's mission is to serve as a combat support agency for the Department of Defense.

Operations

Operations by the National Security Agency can be divided into three types: collection overseas, domestic collection, and hacking operations. "Echelon" was created in the incubator of the Cold War; today it is a legacy system, and several NSA stations are closing. NSA/CSS, in combination with the equivalent agencies in the United Kingdom (Government Communications Headquarters), Canada (Communications Security Establishment), Australia (Australian Signals Directorate), and New Zealand (Government Communications Security Bureau), otherwise known as the UKUSA group, was reported to be in command of the operation of the so-called ECHELON system. Its capabilities were suspected to include the ability to monitor a large proportion of the world's transmitted civilian telephone, fax, and data traffic. During the early 1970s, the first of what became more than eight large satellite communications dishes were installed at Menwith Hill.
Investigative journalist Duncan Campbell reported in 1988 on the "ECHELON" surveillance program, an extension of the UKUSA Agreement on global signals intelligence (SIGINT), and detailed how the eavesdropping operations worked. On November 3, 1999, the BBC reported that it had confirmation from the Australian Government of the existence of a powerful "global spying network" code-named Echelon that could "eavesdrop on every single phone call, fax or e-mail, anywhere on the planet", with Britain and the United States as the chief protagonists. They confirmed that Menwith Hill was "linked directly to the headquarters of the US National Security Agency (NSA) at Fort Meade in Maryland". The NSA's United States Signals Intelligence Directive 18 (USSID 18) strictly prohibited the interception or collection of information about "... U.S. persons, entities, corporations or organizations ..." without explicit written legal permission from the United States Attorney General when the subject is located abroad, or from the Foreign Intelligence Surveillance Court when within U.S. borders. Alleged Echelon-related activities, including the system's use for motives other than national security, such as political and industrial espionage, received criticism from countries outside the UKUSA alliance. The NSA was also involved in planning to blackmail people with "SEXINT", intelligence gained about a potential target's sexual activity and preferences; those targeted had not committed any apparent crime, nor were they charged with one. To support its facial recognition program, the NSA is intercepting "millions of images per day".

The Real Time Regional Gateway is a data collection program introduced in 2005 in Iraq by the NSA during the Iraq War that consisted of gathering all electronic communication, storing it, then searching and otherwise analyzing it. It was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques. This "collect it all" strategy, introduced by NSA director Keith B. Alexander, is believed by Glenn Greenwald of The Guardian to be the model for the comprehensive worldwide mass archiving of communications in which the NSA was engaged as of 2013. A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East. The NSA has also spied extensively on the European Union, the United Nations, and numerous governments, including allies and trading partners in Europe, South America, and Asia. In June 2015, WikiLeaks published documents showing that the NSA had spied on French companies, as well as documents showing that it had spied on federal German ministries since the 1990s; even the cellphones of Germany's chancellor Angela Merkel and of her predecessors had been intercepted.

In June 2013, Edward Snowden revealed that between 8 February and 8 March 2013, the NSA collected about 124.8 billion telephone data items and 97.1 billion computer data items throughout the world, as displayed in charts from an internal NSA tool codenamed Boundless Informant. Initially, it was reported that some of these data reflected eavesdropping on citizens in countries like Germany, Spain, and France, but it later became clear that those data had been collected by European agencies during military missions abroad and subsequently shared with the NSA.
In 2013, reporters uncovered a secret memo claiming that the NSA had created the Dual EC DRBG random number generator standard with built-in vulnerabilities, and in 2006 had pushed for its adoption by the United States National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO). This memo appears to give credence to previous speculation by cryptographers at Microsoft Research. Edward Snowden claims that the NSA often bypasses encryption altogether by lifting information before it is encrypted or after it is decrypted.

XKeyscore rules (as specified in the file xkeyscorerules100.txt, sourced by the German TV stations NDR and WDR, which claim to have excerpts from its source code) reveal that the NSA tracks users of privacy-enhancing software tools, including Tor; users of an anonymous email service provided by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in Cambridge, Massachusetts; and readers of the Linux Journal. Linus Torvalds, the creator of the Linux kernel, joked during a LinuxCon keynote on September 18, 2013, that the NSA, the original developer of SELinux, wanted a backdoor in the kernel. Later, however, Linus's father, a Member of the European Parliament (MEP), indicated that the NSA had indeed approached his son:

When my oldest son was asked the same question: "Has he been approached by the NSA about backdoors?" he said "No", but at the same time he nodded. Then he was sort of in the legal free. He had given the right answer, everybody understood that the NSA had approached him. — Nils Torvalds, LIBE Committee Inquiry on Electronic Mass Surveillance of EU Citizens – 11th Hearing, 11 November 2013

IBM Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed the export of a version that supported stronger 64-bit keys, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA (quantified in the sketch below). This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government.

While it is assumed that foreign transmissions terminating in the U.S. (such as a non-U.S. citizen accessing a U.S. website) subject non-U.S. citizens to NSA surveillance, recent research into boomerang routing has raised new concerns about the NSA's ability to surveil the domestic Internet traffic of foreign countries. Boomerang routing occurs when an Internet transmission that originates and terminates in a single country transits another. Research at the University of Toronto has suggested that approximately 25% of Canadian domestic traffic may be subject to NSA surveillance as a result of the boomerang routing of Canadian Internet service providers.
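The "workload reduction factor" in the Lotus Notes arrangement described above is straightforward to quantify: escrowing 24 of the 64 key bits leaves the NSA an effective 40-bit search, while every other attacker still faces the full 64-bit keyspace. A back-of-the-envelope sketch; the trial rate used is an arbitrary assumption for illustration only:

```python
# Effect of the Lotus Notes export arrangement: 64-bit keys with 24 bits
# escrowed to the NSA leave the agency a 40-bit brute-force search, while
# every other attacker still faces the full 64-bit keyspace.

FULL_KEY_BITS = 64
ESCROWED_BITS = 24
NSA_KEY_BITS = FULL_KEY_BITS - ESCROWED_BITS  # effectively 40 bits

full_space = 2 ** FULL_KEY_BITS
nsa_space = 2 ** NSA_KEY_BITS
print(f"workload reduction factor: 2^{ESCROWED_BITS} = {2 ** ESCROWED_BITS:,}")

# At an assumed (purely illustrative) 10^9 key trials per second:
RATE = 1e9
SECONDS_PER_YEAR = 86400 * 365.25
print(f"other attackers: ~{full_space / RATE / SECONDS_PER_YEAR:,.0f} years")
print(f"NSA:             ~{nsa_space / RATE / 60:,.1f} minutes")
```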
A document included in the NSA files released with Glenn Greenwald's book No Place to Hide details how the agency's Tailored Access Operations (TAO) unit and other NSA units gained access to hardware: they intercepted routers, servers, and other network equipment being shipped to organizations targeted for surveillance and installed covert implant firmware on them before delivery. This was described by an NSA manager as "some of the most productive operations in TAO because they preposition access points into hard target networks around the world." Computers seized by the NSA through interdiction are often modified with a physical device known as Cottonmouth, which can be inserted into the USB port of a computer to establish remote access to the targeted machine. According to the NSA's TAO group implant catalog, after implanting Cottonmouth, the NSA can establish a network bridge "that allows the NSA to load exploit software onto modified computers as well as allowing the NSA to relay commands and data between hardware and software implants."

The NSA's mission, as outlined in Executive Order 12333 in 1981, is to collect information that constitutes "foreign intelligence or counterintelligence" while not "acquiring information concerning the domestic activities of United States persons". The NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the United States, while confining its own activities within the United States to the embassies and missions of foreign nations. The appearance of a 'Domestic Surveillance Directorate' of the NSA was exposed as a hoax in 2013.

The NSA's domestic surveillance activities are limited by the requirements imposed by the Fourth Amendment to the U.S. Constitution; the Foreign Intelligence Surveillance Court, for example, held in October 2011, citing multiple Supreme Court precedents, that the Fourth Amendment prohibitions against unreasonable searches and seizures apply to the contents of all communications, whatever the means, because "a person's private communications are akin to personal papers." However, these protections do not apply to non-U.S. persons located outside of U.S. borders, so the NSA's foreign surveillance efforts are subject to far fewer limitations under U.S. law. The specific requirements for domestic surveillance operations are contained in the Foreign Intelligence Surveillance Act of 1978 (FISA), which does not extend protection to non-U.S. citizens located outside of U.S. territory.

George W. Bush, president during the 9/11 terrorist attacks, approved the Patriot Act shortly after the attacks to take anti-terrorist security measures. Titles 1, 2, and 9 specifically authorized measures that would be taken by the NSA; these titles granted enhanced domestic security against terrorism, surveillance procedures, and improved intelligence, respectively. On March 10, 2004, there was a debate between President Bush and White House Counsel Alberto Gonzales, Attorney General John Ashcroft, and Acting Attorney General James Comey. The attorneys general were unsure whether the NSA's programs could be considered constitutional and threatened to resign over the matter, but ultimately the NSA's programs continued. On March 11, 2004, President Bush signed a new authorization for mass surveillance of Internet records, in addition to the surveillance of phone records, allowing the president to override laws such as the Foreign Intelligence Surveillance Act, which protected civilians from mass surveillance; in addition, the authorization placed the mass surveillance measures retroactively in effect.
One such surveillance program, authorized by U.S. Signals Intelligence Directive 18 under President George Bush, was the Highlander Project, undertaken for the National Security Agency by the U.S. Army 513th Military Intelligence Brigade. The NSA relayed telephone (including cell phone) conversations obtained from ground, airborne, and satellite monitoring stations to various U.S. Army Signal Intelligence Officers, including the 201st Military Intelligence Battalion. Conversations of U.S. citizens were intercepted, along with those of other nations. Proponents of the surveillance program claim that the president has executive authority to order such action, arguing that laws such as FISA are overridden by the president's constitutional powers; in addition, some argued that FISA was implicitly overridden by a subsequent statute, the Authorization for Use of Military Force, although the Supreme Court's ruling in Hamdan v. Rumsfeld deprecates this view.

Under the PRISM program, which started in 2007, the NSA gathers Internet communications from foreign targets from nine major U.S. Internet-based communication service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple. Data gathered include email, videos, photos, VoIP chats such as Skype, and file transfers. Former NSA director General Keith Alexander claimed that in September 2009 the NSA prevented Najibullah Zazi and his friends from carrying out a terrorist attack; however, no evidence has been presented demonstrating that the NSA has ever been instrumental in preventing a terrorist attack.

FASCIA is a database created and used by the NSA that contains trillions of device-location records collected from a variety of sources. Its existence was revealed during the 2013 global surveillance disclosures by Edward Snowden. The FASCIA database stores various types of information, including Location Area Codes (LACs), Cell Tower IDs (CellIDs), Visitor Location Registers (VLRs), International Mobile Station Equipment Identities (IMEIs), and MSISDNs (Mobile Subscriber Integrated Services Digital Network Numbers). Over about seven months, more than 27 terabytes of location data were collected and stored in the database.

Commercial Solutions for Classified (CSfC) is a key component of the NSA's commercial cybersecurity strategy. CSfC-validated commercial products are validated as meeting rigorous security requirements for the protection of classified National Security Systems (NSS) data. Once products are validated, the Department of Defense (DoD), the Intelligence Community, the military services, and other U.S. government agencies are able to implement these commercial hardware and software technologies in their data protection and cybersecurity solutions.

Besides more traditional ways of eavesdropping to collect signals intelligence, the NSA is also engaged in hacking computers, smartphones, and their networks. The division that conducts such operations is Tailored Access Operations (TAO), which has been active since at least circa 1998. According to Foreign Policy magazine, "... the Office of Tailored Access Operations, or TAO, has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China." In an interview with Wired magazine, Edward Snowden said the Tailored Access Operations division accidentally caused Syria's internet blackout in 2012.
Organizational structure

The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM), and who is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS. The NSA also has an Inspector General, head of the Office of the Inspector General (OIG); a General Counsel, head of the Office of the General Counsel (OGC); and a Director of Compliance, head of the Office of the Director of Compliance (ODOC). The National Security Agency Office of Inspector General has worked on cases in collaboration with the United States Department of Justice and the Central Intelligence Agency Office of Inspector General.

Unlike other intelligence organizations such as the CIA or DIA, the NSA has always been particularly reticent concerning its internal organizational structure. As of the mid-1990s, the National Security Agency was organized into five directorates, each consisting of several groups or elements designated by a letter: there were, for example, the A Group, responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and the G Group, responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, such as unit A5, for breaking Soviet codes, and G6, the office for the Middle East, North Africa, Cuba, and Central and South America. As of 2013, the NSA has about a dozen directorates, designated by letters, although not all of them are publicly known.

In 2000, a leadership team was formed, consisting of the director, the deputy director, and the directors of the Signals Intelligence Directorate (SID), the Information Assurance Directorate (IAD), and the Technical Directorate (TD); the chiefs of other main NSA divisions became associate directors of the senior leadership team. After President George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata, and telephone metadata; both units were part of the Signals Intelligence Directorate. In 2016, a reorganization combined the Signals Intelligence Directorate and the Information Assurance Directorate into a Directorate of Operations.

NSANet stands for National Security Agency Network and is the official NSA intranet. It is a classified network for information up to the level of TS/SCI, supporting the use and sharing of intelligence data between the NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. The management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS). NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels that are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to access the agency's systems and databases. This access is tightly controlled and monitored; for example, every keystroke is logged, activities are audited at random, and downloading and printing of documents from NSANet are recorded.
In 1998, NSANet, along with NIPRNet and SIPRNet, had "significant problems with poor search capabilities, unorganized data, and old information", and in 2004 the network was reported to have used over twenty commercial off-the-shelf operating systems. Some universities that do highly sensitive research are allowed to connect to it. The thousands of Top Secret internal NSA documents taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site", so they could easily be read online by NSA personnel; everyone with a TS/SCI clearance had access to them. As a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to safer storage locations.

The NSA maintains at least two watch centers. It also has its own law enforcement team, known as the NSA Police (and formerly as the NSA Security Protective Force), which provides law enforcement services, emergency response, and physical security for its officials and properties. NSA Police are armed federal officers and use marked vehicles to carry out patrols; the force has a K9 division, which generally conducts explosive-detection screening of mail, vehicles, and cargo entering NSA grounds.

Employees

The number of NSA employees is officially classified, but there are several sources providing estimates. In 1961, the NSA had 59,000 military and civilian employees, a figure which grew to 93,067 in 1969, of whom 19,300 worked at the headquarters at Fort Meade. In the early 1980s, the NSA had roughly 50,000 military and civilian personnel; by 1989 this number had grown again to 75,000, of whom 25,000 worked at the NSA headquarters. Between 1990 and 1995, the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience. In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities. That year, John C. Inglis, the deputy director, joked that the total number of NSA employees is "somewhere between 37,000 and one billion", and stated that the agency is "probably the biggest employer of introverts." In 2013, Der Spiegel stated that the NSA had 40,000 employees. More widely, it has been described as the world's largest single employer of mathematicians. Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence. As of 2013, about 1,000 system administrators worked for the NSA.

The NSA received criticism early on, in 1960, after two agents defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe cases of ignorance of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices. Nonetheless, security breaches recurred only a year later, when in an issue of Izvestia of July 23, 1963, a former NSA employee published several cryptologic secrets. The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets regularly.
The reluctance of the houses of Congress to look into these affairs prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's concealment of its activities as smug, and Congress's blind faith in the agency's good conduct as shortsighted, and pointed out the necessity of congressional oversight to prevent abuse of power.

Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", under which two system administrators must be present when one accesses certain sensitive information. Snowden claims he suggested such a rule in 2009.

The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant susceptible to coercion. As part of the latter, historically EPQs, or "embarrassing personal questions" about sexual behavior, had been included in the NSA polygraph. The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition, the NSA conducts periodic polygraph investigations to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA. There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas; those polygraphs cover counterintelligence questions and some questions about behavior. The NSA's brochure states that the average test lasts between two and four hours. A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions." Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had gone undetected.

In 2010 the NSA produced a ten-minute video, "The Truth About the Polygraph", explaining its polygraph process; it was posted to the website of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them—it's not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process, and it produced a video responding to the NSA video. George Maschke, the founder of the website, accused the NSA polygraph video of being "Orwellian". After Edward Snowden revealed his identity in 2013, the NSA reportedly began requiring employees to be polygraphed once per quarter.

The NSA's exemptions from legal requirements have also been criticized. When in 1964 Congress was hearing a bill giving the director of the NSA the power to fire any employee at will, The Washington Post wrote: "This is the very definition of arbitrariness.
It means that an employee could be discharged and disgraced based on anonymous allegations without the slightest opportunity to defend himself." Yet the bill was accepted by an overwhelming majority. In addition, every person hired to a job in the US after 2007, at any private organization or state or federal government agency, must be reported to the New Hire Registry, ostensibly to look for child-support evaders; however, employees of an intelligence agency may be excluded from reporting if the director deems it necessary for national-security reasons.

Facilities

When the agency was first established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations. Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough from Washington, D.C. in case of a nuclear strike yet close enough that its employees would not have to move their families.

Construction of additional buildings began after the agency occupied buildings at Fort Meade in the late 1950s, which it soon outgrew. In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to it as the "Headquarters Building", and since NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders. COMSEC remained in Washington, D.C., until its new building was completed in 1968. In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan. The four NSA buildings became known as the "Big Four", and the NSA director moved to 2B when it opened.

Headquarters for the National Security Agency is located at 39°6′32″N 76°46′17″W in Fort George G. Meade, Maryland, although it is separate from other compounds and agencies based within this same military installation. Fort Meade is about 20 mi (32 km) southwest of Baltimore and 25 mi (40 km) northeast of Washington, D.C. The NSA has two dedicated exits off the Baltimore–Washington Parkway. The eastbound exit (heading toward Baltimore) is open to the public and provides employee access to the main campus and public access to the National Cryptologic Museum. The westbound exit (heading toward Washington) is labeled "NSA Employees Only"; it may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.

The NSA is the largest employer in the state of Maryland, and two thirds of its personnel work at Fort Meade. Built on 350 acres (140 ha; 0.55 sq mi) of Fort Meade's 5,000 acres (2,000 ha; 7.8 sq mi), the site has 1,300 buildings and an estimated 18,000 parking spaces. The main NSA headquarters and operations building is what James Bamford, author of Body of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building." The building is covered with one-way dark glass, lined with copper shielding to prevent espionage by trapping in signals and sounds.
It contains 3,000,000 square feet (280,000 m2), or more than 68 acres (28 ha), of floor space; Bamford said that the U.S. Capitol "could easily fit inside it four times over." The facility has over 100 watchposts, one of them being the visitor control center, a two-story area that serves as the entrance. At the entrance, a white pentagonal structure, visitor badges are issued to visitors and the security clearances of employees are checked. The visitor center includes a painting of the NSA seal. The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center; Bamford described it as a "dark glass Rubik's Cube". The facility's "red corridor" houses non-security operations such as concessions and the drug store; the name refers to the "red badge" worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank, and has its own post office, fire department, and police force. Employees at NSA headquarters live in various places in the Baltimore–Washington area, including Annapolis, Baltimore, and Columbia in Maryland, and the District of Columbia, including the Georgetown community. The NSA has maintained a shuttle service from the Odenton station of MARC to its Visitor Control Center since 2005.

Following a major power outage in 2000, The Baltimore Sun reported in 2003, with follow-ups through 2007, that the NSA was at risk of electrical overload because the internal electrical infrastructure at Fort Meade was insufficient to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened." On August 6, 2006, The Baltimore Sun reported that the NSA had completely maxed out the grid and that Baltimore Gas & Electric (BGE, now Constellation Energy) was unable to sell it any more power, so the NSA decided to move some of its operations to a new satellite facility. BGE provided the NSA with 65 to 75 megawatts at Fort Meade in 2007 and expected that an increase of 10 to 15 megawatts would be needed later that year. In 2011, the NSA was Maryland's largest consumer of power; in 2007, as BGE's largest customer, it bought as much electricity as Annapolis, the capital of Maryland. One estimate put the potential power consumption of the new Utah Data Center at US$40 million per year.

In 1995, The Baltimore Sun reported that the NSA owned the single largest group of supercomputers. The NSA held a groundbreaking ceremony at Fort Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016. Called Site M, the center has a 150-megawatt power substation, 14 administrative buildings, and 10 parking garages. It cost $3.2 billion and covers 227 acres (92 ha; 0.355 sq mi); the center is 1,800,000 square feet (17 ha; 0.065 sq mi) and initially uses 60 megawatts of electricity. Increments II and III are expected to be completed by 2030 and would quadruple the space, covering 5,800,000 square feet (54 ha; 0.21 sq mi) with 60 buildings and 40 parking garages. Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area.

The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985.
The NCSC was responsible for computer security throughout the federal government. The NCSC was part of the NSA, and during the late 1980s and the 1990s, the NSA and NCSC published the Trusted Computer System Evaluation Criteria in a six-foot-high "Rainbow Series" of books detailing trusted computing and network platform specifications. In the early 2000s, however, the Rainbow books were replaced by the Common Criteria.

The NSA has had facilities at Friendship Annex (FANX) in Linthicum, Maryland, a 20-to-25-minute drive from Fort Meade; the Aerospace Data Facility at Buckley Space Force Base in Aurora, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia at the Georgia Cryptologic Center, Fort Gordon, Augusta, Georgia; NSA Hawaii at the Hawaii Cryptologic Center in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee; and elsewhere.

In 2009, to protect its assets and gain access to more electricity, the NSA sought to decentralize and expand its existing facilities at Fort Meade and Menwith Hill, the latter expansion expected to be completed by 2015. On January 6, 2011, a groundbreaking ceremony was held to begin construction of the NSA's first Comprehensive National Cyber-security Initiative (CNCI) data center, known as the "Utah Data Center" for short. The $1.5 billion data center was built at Camp Williams, Utah, located 25 miles (40 km) south of Salt Lake City, to help support the agency's National Cyber-security Initiative. It was expected to be operational by September 2013; construction of the Utah Data Center was finished in May 2019.

In 2012, the NSA collected intelligence from four geostationary satellites. Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania, and Salt Creek Station in Arbuckle, California. It operated ten to twenty taps on U.S. telecom switches. The NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia. The Yakima Herald-Republic cited Bamford as saying that many of the NSA's bases for its ECHELON program were a legacy system using outdated 1990s technology. In 2004, the NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany. In 2012, the NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, intending eventually to close Yakima. During 2013, the NSA also intended to close operations at Sugar Grove, West Virginia.

Following the UKUSA Agreement among the Five Eyes, which cooperate on signals intelligence and ECHELON, NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap, and Shoal Bay, Australia; Leitrim and Ottawa, Ontario, Canada; Misawa, Japan; and Waihopai and Tangimoana, New Zealand. The NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which, according to BBC News in 2007, was the largest electronic monitoring station in the world. Planned in 1954 and opened in 1960, the base covered 562 acres (227 ha; 0.878 sq mi) in 1999.

The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt, Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East, and counterterrorism operations.
Since the mid-1980s, the NSA and Taiwan's National Security Bureau have jointly operated a signals intelligence (SIGINT) listening station at Yangmingshan. As of 2013, a new Consolidated Intelligence Center, also to be used by the NSA, was being built at the headquarters of United States Army Europe in Wiesbaden, Germany. The NSA's partnership with the Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.

Thailand is a "3rd party partner" of the NSA, along with nine other nations: non-English-speaking countries that have made security agreements for the exchange of raw SIGINT material and end-product reports. Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, an NSA-CIA Joint Special Collection Service (JSCS) unit, which presumably eavesdrops on foreign consulates, embassies, governmental communications, and other targets of opportunity. The second installation is a FORNSAT (foreign satellite interception) station in the Thai city of Khon Kaen, codenamed INDRA but also referred to as LEMONWOOD. The station is approximately 40 hectares (99 acres) in size and consists of a large 3,700–4,600 m2 (40,000–50,000 sq ft) operations building on the west side of the ops compound and four radome-enclosed parabolic antennas. Possibly two of the radome-enclosed antennas are used for SATCOM intercept, with the other two relaying the intercepted material back to the NSA. There is also a PUSHER-type circularly disposed antenna array (CDAA) just north of the ops compound.

The NSA activated Khon Kaen in October 1979. Its mission was to eavesdrop on the radio traffic of Chinese army and air force units in southern China, especially in and around the city of Kunming in Yunnan Province. In the late 1970s, the base consisted only of a small CDAA antenna array, remote-controlled via satellite from the NSA listening post at Kunia, Hawaii, and a small force of civilian contractors from Bendix Field Engineering Corp. whose job was to keep the antenna array and satellite relay facilities up and running around the clock. According to the papers of the late General William Odom, the INDRA facility was upgraded in 1986 with a new British-made PUSHER CDAA antenna as part of an overall upgrade of NSA and Thai SIGINT facilities whose objective was to spy on the neighboring communist nations of Vietnam, Laos, and Cambodia. The base fell into disrepair in the 1990s as China and Vietnam became more friendly toward the US, and by 2002 archived satellite imagery showed that the PUSHER CDAA antenna had been torn down, perhaps indicating that the base had been closed. At some point since 9/11, the Khon Kaen base was reactivated and expanded to include a sizeable SATCOM intercept mission. The NSA presence at Khon Kaen is likely relatively small, with most of the work done by civilian contractors.

Research and development

The NSA has been involved in debates about public policy, both indirectly as a behind-the-scenes adviser to other departments and directly during and after Vice Admiral Bobby Ray Inman's directorship. The NSA was a major player in the debates of the 1990s regarding the export of cryptography in the United States. Restrictions on export were reduced but not eliminated in 1996.
Its secure-government-communications work has involved the NSA in numerous technology areas, including the design of specialized communications hardware and software, the production of dedicated semiconductors at the Fort Meade chip fabrication plant, and advanced cryptography research. For 50 years, the NSA designed and built most of its computer equipment in-house, but from the 1990s until about 2003 (when the U.S. Congress curtailed the practice), the agency contracted with the private sector for research and equipment.

The NSA was embroiled in controversy concerning its involvement in the creation of the Data Encryption Standard (DES), a standard and public block cipher algorithm used by the U.S. government and banking community. During the development of DES by IBM in the 1970s, the NSA recommended changes to some details of the design. There was suspicion that these changes had weakened the algorithm sufficiently to enable the agency to eavesdrop if required, including speculation that a critical component, the so-called S-boxes, had been altered to insert a "backdoor", and that the reduction in key length might have made it feasible for the NSA to discover DES keys using massive computing power. It has since been observed that the S-boxes in DES are particularly resilient against differential cryptanalysis, a technique that was not publicly discovered until the late 1980s but was known to the IBM DES team. The NSA's involvement in selecting a successor to DES, the Advanced Encryption Standard (AES), was limited to hardware performance testing. The NSA has subsequently certified AES for protection of classified information when used in NSA-approved systems.

The NSA is responsible for the encryption-related components in a number of legacy systems and oversees the encryption in several systems in use today. The NSA has specified the Suite A and Suite B cryptographic algorithm suites for use in U.S. government systems; the Suite B algorithms are a subset of those previously specified by NIST and are expected to serve for most information-protection purposes, while the Suite A algorithms are secret and are intended for especially high levels of protection.

The widely used SHA-1 and SHA-2 hash functions were designed by the NSA. SHA-1 is a slight modification of the weaker SHA-0 algorithm, also designed by the NSA in 1993. This small modification was suggested by the NSA two years later, with no justification other than that it provided additional security. An attack on SHA-0 that does not apply to the revised algorithm was indeed found between 1998 and 2005 by academic cryptographers. Because of weaknesses and key length restrictions in SHA-1, NIST deprecated its use for digital signatures and approved only the newer SHA-2 algorithms for such applications from 2013 on. A new hash standard, SHA-3, was selected through a competition concluded on October 2, 2012, with the selection of Keccak as the algorithm. The process for selecting SHA-3 was similar to the one held in choosing AES, but some doubts have been cast over it, since fundamental modifications were made to Keccak in order to turn it into a standard. These changes potentially undermine the cryptanalysis performed during the competition and reduce the security levels of the algorithm.
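To make the relationships among these hash families concrete, the following minimal Python sketch computes digests of the same arbitrary message with SHA-1 (deprecated by NIST for digital signatures), SHA-256 (a SHA-2 variant), and SHA3-256 (the Keccak-based standard). It uses only the standard library's hashlib; the message is a placeholder.

```python
import hashlib

msg = b"example message"

# SHA-1 (160-bit digest; deprecated by NIST for digital signatures),
# SHA-256 (SHA-2 family), and SHA3-256 (standardized from Keccak).
for name in ("sha1", "sha256", "sha3_256"):
    print(f"{name:>8}: {hashlib.new(name, msg).hexdigest()}")
```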
Because of concerns that widespread use of strong cryptography would hamper government use of wiretaps, the NSA proposed the concept of key escrow in 1993 and introduced the Clipper chip, which would offer stronger protection than DES but would allow access to encrypted data by authorized law enforcement officials. The proposal was strongly opposed, and key escrow requirements ultimately went nowhere. However, the NSA's Fortezza hardware-based encryption cards, created for the Clipper project, are still used within government, and the NSA ultimately declassified and published the design of the Skipjack cipher used on the cards.

The NSA promoted the inclusion of a random number generator called Dual EC DRBG in the U.S. National Institute of Standards and Technology's 2007 guidelines. This led to speculation about a backdoor that would allow the NSA access to data encrypted by systems using that pseudorandom number generator (PRNG). This speculation is now deemed plausible, because if the relation between the generator's two internal elliptic-curve points is known, the output of subsequent iterations of the PRNG can provably be determined. Both NIST and RSA now officially recommend against the use of this PRNG.

Perfect Citizen is an NSA program to assess vulnerabilities in American critical infrastructure. It was originally reported to be a program to develop a system of sensors to detect cyber attacks on critical infrastructure computer networks in both the private and public sectors, through a network monitoring system named Einstein. It is funded by the Comprehensive National Cybersecurity Initiative, and thus far Raytheon has received a contract for up to $100 million for the initial stage.

The NSA has invested many millions of dollars in academic research under grant code prefix MDA904, resulting in over 3,000 papers as of October 11, 2007. The NSA publishes its documents through various publications. Despite this, the NSA/CSS has at times attempted to restrict the publication of academic research into cryptography; for example, the Khufu and Khafre block ciphers were voluntarily withheld in response to an NSA request. In response to a FOIA lawsuit, in 2013 the NSA released the 643-page research paper "Untangling the Web: A Guide to Internet Research", written and compiled by NSA employees to assist other NSA workers in searching the public Internet for information of interest to the agency.

The NSA can file for a patent from the U.S. Patent and Trademark Office under gag order. Unlike normal patents, these are not revealed to the public and do not expire. However, if the Patent Office receives an application for an identical patent from a third party, it will reveal the NSA's patent and officially grant it to the NSA for the full term as of that date. One of the NSA's published patents describes a method of geographically locating an individual computer site in an Internet-like network, based on the latency of multiple network connections. Although no public patent exists, the NSA is reported to have used a similar locating technology, called trilateralization, that allows real-time tracking of an individual's location, including altitude from ground level, using data obtained from cellphone towers.

Insignia and memorials

The heraldic insignia of the NSA consists of an eagle inside a circle, grasping a key in its talons. The eagle represents the agency's national mission.
Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress. The key is taken from the emblem of Saint Peter and represents security. When the NSA was created, the agency had no emblem and used that of the Department of Defense. The agency adopted the first of its two emblems in 1963. The current NSA insignia has been in use since 1965, when then-Director LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency. The NSA's flag consists of the agency's seal on a light blue background.

Crews associated with NSA missions have been involved in several dangerous and deadly situations; the USS Liberty incident in 1967 and the USS Pueblo incident in 1968 are examples of the losses endured during the Cold War. The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions. It is made of black granite and, as of 2013, has 171 names carved into it. It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001.

Constitutionality, legality, and privacy concerning operations

In the United States, at least since 2001, there has been legal controversy over what signals intelligence can be used for and how much freedom the National Security Agency has to use it. In 2015, the government made slight changes in how it uses and collects certain types of data, specifically phone records; it was no longer analyzing the phone records as of early 2019. The surveillance programs were deemed unlawful in September 2020 in a court of appeals case.

On December 16, 2005, The New York Times reported that under White House pressure and with an executive order from President George W. Bush, the National Security Agency, in an attempt to thwart terrorism, had been tapping phone calls made to persons outside the country without obtaining warrants from the United States Foreign Intelligence Surveillance Court, a secret court created for that purpose under the Foreign Intelligence Surveillance Act (FISA).

Edward Snowden is a former American intelligence contractor who in 2013 revealed the existence of secret wide-ranging information-gathering programs conducted by the National Security Agency (NSA). More specifically, Snowden released information demonstrating how the United States government was gathering immense amounts of personal communications, emails, phone locations, web histories, and more from American citizens without their knowledge. One of Snowden's primary motivations for releasing this information was fear that a surveillance state would develop from the infrastructure being created by the NSA. As Snowden recounts, "I believe that, at this point in history, the greatest danger to our freedom and way of life comes from the reasonable fear of omniscient State powers kept in check by nothing more than policy documents... It is not that I do not value intelligence, but that I oppose . . . omniscient, automatic, mass surveillance. . . . That seems to me a greater threat to the institutions of free society than missed intelligence reports, and unworthy of the costs."

In March 2014, Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, told the House Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ...
had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures." When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but that Snowden may have downloaded more than a million documents.

On January 17, 2006, the Center for Constitutional Rights filed a lawsuit, CCR v. Bush, against the George W. Bush presidency. The lawsuit challenged the NSA's surveillance of people within the U.S., including the interception of CCR emails without first securing a warrant. In the August 2006 case ACLU v. NSA, U.S. District Court Judge Anna Diggs Taylor concluded that the NSA's warrantless surveillance program was both illegal and unconstitutional; on July 6, 2007, the 6th Circuit Court of Appeals vacated the decision on the ground that the ACLU lacked standing to bring the suit. In September 2008, the Electronic Frontier Foundation (EFF) filed a class-action lawsuit against the NSA and several high-ranking officials of the Bush administration, charging an "illegal and unconstitutional program of dragnet communications surveillance", based on documentation provided by former AT&T technician Mark Klein.

As a result of the USA Freedom Act, passed by Congress in June 2015, the NSA had to shut down its bulk phone surveillance program on November 29 of the same year. The USA Freedom Act forbids the NSA to collect metadata and content of phone calls unless it has a warrant for a terrorism investigation; in that case, the agency must ask the telecom companies for the records, which are kept for only six months. The NSA's use of large telecom companies to assist its surveillance efforts has caused several privacy concerns.

In May 2008, Mark Klein, a former AT&T employee, alleged that his company had cooperated with the NSA in installing Narus hardware to replace the FBI's Carnivore program, to monitor network communications, including traffic between U.S. citizens. The NSA was reported in 2008 to use its computing capability to analyze "transactional" data that it regularly acquires from other government agencies, which gather it under their own jurisdictional authorities. A 2013 advisory group for the Obama administration, seeking to reform NSA spying programs following the revelations of documents released by Edward J. Snowden, mentioned in "Recommendation 30" on page 37 "...that the National Security Council staff should manage an interagency process to review regularly the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application." Retired cybersecurity expert Richard A. Clarke was a member of the group and stated on April 11, 2014, that the NSA had had no advance knowledge of Heartbleed.

In August 2013 it was revealed that a 2005 IRS training document showed that NSA intelligence intercepts and wiretaps, both foreign and domestic, were being supplied to the Drug Enforcement Administration (DEA) and the Internal Revenue Service (IRS) and were illegally used to launch criminal investigations of US citizens. Law enforcement agents were directed to conceal how the investigations began and to recreate a legal investigative trail by re-obtaining the same evidence by other means.

In the months leading up to April 2009, the NSA intercepted the communications of U.S.
citizens, including a congressman, although the Justice Department believed the interception was unintentional. The Justice Department then took action to correct the issues and bring the program into compliance with existing laws. United States Attorney General Eric Holder resumed the program according to his understanding of the 2008 amendment to the Foreign Intelligence Surveillance Act, without explaining what had occurred.

Polls conducted in June 2013 found divided results among Americans regarding the NSA's secret data collection: Rasmussen Reports found that 59% of Americans disapproved, Gallup found that 53% disapproved, and Pew found that 56% were in favor of NSA data collection.

On April 25, 2013, the NSA obtained a court order requiring Verizon's Business Network Services to provide metadata on all calls in its system to the NSA "on an ongoing daily basis" for three months, as reported by The Guardian on June 6, 2013. This information includes "the numbers of both parties on a call ... location data, call duration, unique identifiers, and the time and duration of all calls" but not "[t]he contents of the conversation itself". The order relies on the so-called "business records" provision of the Patriot Act.

In August 2013, following the Snowden leaks, new details about the NSA's data-mining activity were revealed. Reportedly, the majority of emails into or out of the United States are captured at "selected communications links" and automatically analyzed for keywords or other "selectors"; emails that do not match are deleted.

The utility of such massive metadata collection in preventing terrorist attacks is disputed, and many studies have found the dragnet-like system to be ineffective. One such report, released by the New America Foundation, concluded after an analysis of 225 terrorism cases that the NSA "had no discernible impact on preventing acts of terrorism." Defenders of the program say that while metadata alone cannot provide all the information necessary to prevent an attack, it assures the ability to "connect the dots" between suspect foreign numbers and domestic numbers with a speed only the NSA's software is capable of; one benefit of this is being able to quickly distinguish suspicious activity from real threats. As an example, NSA director General Keith B. Alexander mentioned at the annual Cybersecurity Summit in 2013 that metadata analysis of domestic phone call records after the Boston Marathon bombing helped determine that rumors of a follow-up attack in New York were baseless.

In addition to doubts about its effectiveness, many people argue that the collection of metadata is an unconstitutional invasion of privacy. As of 2015, the collection process remained legal and grounded in the ruling in Smith v. Maryland (1979). A prominent opponent of the data collection and its legality is U.S. District Judge Richard J. Leon, who stated in a 2013 ruling: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval...Surely, such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment".
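The "connect the dots" contact chaining described above can be sketched, under loose assumptions, as a bounded breadth-first traversal of a call graph. The toy numbers and the contact_chain helper below are invented for illustration and say nothing about the NSA's actual systems; the sketch only shows the general technique of expanding a seed identifier by a fixed number of "hops".

```python
from collections import deque

# Toy call graph: phone number -> set of numbers it exchanged calls with.
calls = {
    "+15550001": {"+15550002", "+15550003"},
    "+15550002": {"+15550001", "+15550004"},
    "+15550003": {"+15550001"},
    "+15550004": {"+15550002", "+15550005"},
    "+15550005": {"+15550004"},
}

def contact_chain(seed: str, hops: int) -> set:
    """Return every number reachable from `seed` within `hops` hops (BFS)."""
    seen, frontier = {seed}, deque([(seed, 0)])
    while frontier:
        number, depth = frontier.popleft()
        if depth == hops:
            continue
        for contact in calls.get(number, ()):
            if contact not in seen:
                seen.add(contact)
                frontier.append((contact, depth + 1))
    return seen - {seed}

print(contact_chain("+15550001", 2))  # two-hop neighborhood of the seed
```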
On May 7, 2015, the United States Court of Appeals for the Second Circuit ruled that the interpretation of Section 215 of the Patriot Act had been wrong and that the NSA program that had been collecting Americans' phone records in bulk was illegal. It stated that Section 215 could not be interpreted to allow the government to collect national phone data; the provision, as a result, was allowed to expire on June 1, 2015. This ruling "is the first time a higher-level court in the regular judicial system has reviewed the NSA phone records program." The replacement law, the USA Freedom Act, enables the NSA to continue to have bulk access to citizens' metadata, but with the stipulation that the data are now stored by the companies themselves. This change has no effect on other agency procedures, outside of metadata collection, that have purportedly challenged Americans' Fourth Amendment rights, including Upstream collection, a suite of techniques used by the agency to collect and store Americans' data and communications directly from the Internet backbone.

Under the Upstream collection program, the NSA paid telecommunications companies hundreds of millions of dollars in order to collect data from them. While companies such as Google and Yahoo! claim that they do not provide "direct access" from their servers to the NSA unless under a court order, the NSA had access to users' emails, phone calls, and cellular data. Under the new ruling, telecommunications companies maintain bulk user metadata on their servers for at least 18 months, to be provided upon request to the NSA. The ruling made the mass storage of specific phone records at NSA data centers illegal, but it did not rule on the constitutionality of Section 215.

In a declassified document it was revealed that, in breach of compliance, 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009, which tagged these phone lines for daily monitoring. Eleven percent of these monitored phone lines met the agency's legal standard of "reasonably articulable suspicion" (RAS).

The NSA tracks the locations of hundreds of millions of cell phones per day, allowing it to map people's movements and relationships in detail. The NSA has been reported to have access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple, and Paltalk, and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year. It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing, or otherwise infiltrating numerous technology companies to leave "backdoors" into their systems), so that much of that encryption is vulnerable to different forms of attack.

Domestically, the NSA has been proven to collect and store metadata records of phone calls, including those of over 120 million US Verizon subscribers, as well as to intercept vast amounts of communications via the Internet (Upstream). The government's legal standing had been to rely on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism. The NSA also supplies foreign intercepts to the DEA, IRS, and other law enforcement agencies, which use these to initiate criminal investigations; federal agents are then instructed to "recreate" the investigative trail via parallel construction.
The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and foreign, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA. According to a report in The Washington Post in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. were ordinary Americans and not the intended targets. The newspaper said it had examined documents, including emails, text messages, and online accounts, that support the claim.

The Intelligence Committees of the US House and Senate exercise primary oversight over the NSA; other members of Congress have been denied access to materials and information regarding the agency and its activities. The United States Foreign Intelligence Surveillance Court, the secret court charged with regulating the NSA's activities, is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules. It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions. NSA officers have even used data intercepts to spy on love interests; "most of the NSA violations were self-reported, and each instance resulted in administrative action of termination."

The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies. A March 2009 FISA Court opinion, which the court released, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively." In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe". Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.

Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, writing: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgment of the freedom of the people by gradual and silent encroachments by those in power,' would be aghast."

Later that month, U.S. District Judge William Pauley ruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote that "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data before 9/11 might have prevented the attack.
At a March 2013 Senate Intelligence Committee hearing, Senator Ron Wyden asked Director of National Intelligence James Clapper, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" Clapper replied, "No, sir. ... Not wittingly. There are cases where they could inadvertently perhaps collect, but not wittingly." This statement came under scrutiny months later, in June 2013, when details of the PRISM surveillance program were published, showing that "the NSA apparently can gain access to the servers of nine Internet companies for a wide range of digital data." Wyden said that Clapper had failed to give a "straight answer" in his testimony. Clapper, in response to criticism, said, "I responded in what I thought was the most truthful, or least untruthful manner." Clapper added, "There are honest differences on the semantics of what—when someone says 'collection' to me, that has a specific meaning, which may have a different meaning to him."

Edward Snowden additionally revealed the existence of XKeyscore, a top-secret surveillance program that allows the NSA to search vast databases of "the metadata as well as the content of emails and other internet activity, such as browser history," with the capability to search by "name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used." XKeyscore "provides the technological capability, if not the legal authority, to target even US persons for extensive electronic surveillance without a warrant provided that some identifying information, such as their email or IP address, is known to the analyst."

Regarding the necessity of these NSA programs, Alexander stated on June 27, 2013, that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and that in all but one of these cases they had provided the initial tip to "unravel the threat stream". On July 31, NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but had been "close" to vital in identifying and convicting four San Diego men for sending US$8,930 to Al-Shabaab, a militia that conducts terrorism in Somalia.

The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance. The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country, and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia. In October 2014, a United Nations report condemned mass surveillance programs carried out by the U.S. intelligence community and other nations as violating multiple global treaties and conventions that guarantee core privacy rights.

An exploit dubbed EternalBlue, created by the NSA, was used in the WannaCry ransomware attack in May 2017. The exploit had been leaked online by a hacking group, the Shadow Brokers, nearly a month before the attack. Several experts have blamed the NSA's non-disclosure of the underlying vulnerability and its loss of control over the EternalBlue attack tool that exploited it.
Edward Snowden said that if the NSA had "privately disclosed the flaw used to attack hospitals when they found it, not when they lost it, [the attack] might not have happened". Wikipedia co-founder Jimmy Wales stated that he joined "with Microsoft and the other leaders of the industry in saying this is a huge screw-up by the government ... the moment the NSA found it, they should have notified Microsoft so they could quietly issue a patch and really chivvy people along, long before it became a huge problem."

Former employee David Evenden, who had left the NSA to work for the US defense contractor CyberPoint at a posting in the United Arab Emirates, was tasked in 2015 with hacking the UAE's neighbor Qatar to determine whether it was funding the terrorist group Muslim Brotherhood. He quit the company after learning that his team had hacked the email exchanges between Qatari Sheikha Moza bint Nasser and Michelle Obama, shortly before Obama visited Doha. Upon his return to the US, Evenden reported his experiences to the FBI. The incident highlights a growing trend of former NSA employees and contractors leaving the agency to start their own firms, which then hire out to countries like Turkey, Sudan, and even Russia, a country involved in numerous cyberattacks against the US.

In May 2021, it was reported that the Danish Defence Intelligence Service had collaborated with the NSA to wiretap fellow EU members and leaders, leading to wide backlash among EU countries and demands for explanations from the Danish and American governments. NSA director Paul Nakasone disclosed in a letter to Senator Ron Wyden that the NSA buys data without a warrant.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_note-47] | [TOKENS: 5247] |
Social network

A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities, along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics; for instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field that emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.

Overview

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be conceived and investigated primarily through the properties of relations between and within units, rather than through the properties of the units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singly or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.

History

In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research on social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or as impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups.

Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, Jacob L. Moreno began the systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell, and Elizabeth Bott Spillius, is often credited with performing some of the first fieldwork from which network analyses were carried out, investigating community networks in southern Africa, India, and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.

By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods both to emerging data about online social networks and to "digital traces" of face-to-face networks.

Levels of analysis

In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, so the quality of information may matter more than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.

At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.

Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.

Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In Fritz Heider's balance theory, the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad through a change in one of the relations. The dynamics of social friendships in society have been modeled by balancing triads; the study is carried forward with the theory of signed graphs.

Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.

Subset level: Subset-level research problems begin at the micro-level but may cross over into the meso-level of analysis. Subset-level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.

In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.

Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties, in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises, or semi-autonomous departments. In these cases, research is often conducted at both the work-group level and the organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.

Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework can represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases; this distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network; a generative sketch follows at the end of this section. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: "Large-scale network" is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical system and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
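The ego- and triad-level measures introduced at the micro level above can be computed directly with standard tooling. Below is a minimal sketch using the open-source Python library networkx; the small friendship graph and its node names are invented purely for illustration, not drawn from any study:

    import networkx as nx

    # Toy friendship network: one closed triad plus an open two-path.
    G = nx.Graph()
    G.add_edges_from([
        ("ana", "bo"), ("ana", "chi"), ("bo", "chi"),  # closed triad
        ("ana", "dee"), ("dee", "eli"),                # open two-path
    ])

    # Actor (ego) level: extract ego's first-order neighborhood.
    ego = nx.ego_graph(G, "ana")
    print("ego network size (alters):", ego.number_of_nodes() - 1)
    print("ego network density:", round(nx.density(ego), 2))

    # Triadic level: transitivity is the fraction of connected triples
    # that close into triangles.
    print("transitivity:", round(nx.transitivity(G), 2))

Run as-is, this reports an ego network of three alters around "ana" and a transitivity of 0.5, reflecting the one open two-path that has not closed into a triangle, the kind of imbalance that balance theory predicts will tend to resolve.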
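The scale-free properties described above are likewise easy to demonstrate with a generative model. A short sketch, again with networkx, grows a Barabási–Albert preferential-attachment network and contrasts it with a size-matched random graph; the network size and attachment parameter here are arbitrary choices for illustration:

    import networkx as nx

    # Grow a 1,000-node Barabási–Albert network: each new node attaches
    # preferentially to 2 existing nodes.
    G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

    degrees = dict(G.degree())
    avg = sum(degrees.values()) / len(degrees)
    hubs = sorted(degrees.values(), reverse=True)[:5]
    print(f"average degree: {avg:.2f}")
    print(f"five largest degrees (hubs): {hubs}")

    # A size-matched Erdős–Rényi random graph lacks comparable hubs.
    R = nx.gnm_random_graph(1000, G.number_of_edges(), seed=42)
    print("largest degree in the random graph:", max(d for _, d in R.degree()))

With these settings the hubs' degrees typically exceed the average degree (about 4) many times over, while the matched random graph's maximum degree stays close to the average, which is the contrast the passage above draws between scale-free and purely random models.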
Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are Graph theory, Balance theory, Social comparison theory, and more recently, the Social identity approach. Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques tend to have more homogeneous opinions and to share many common traits. This homophilic tendency is what drew the members of a clique together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique have to look beyond it to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because they are more likely to hear of job openings and opportunities if their network spans a wide range of contacts in different industries and sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.
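Burt's structural-hole measures can be computed directly; networkx ships effective-size and constraint functions for this purpose. A small sketch on a toy two-cluster graph (the node names are illustrative):

    import networkx as nx

    # Two dense clusters joined only through a single broker.
    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c")])  # cluster 1
    G.add_edges_from([("x", "y"), ("y", "z"), ("x", "z")])  # cluster 2
    G.add_edges_from([("c", "broker"), ("x", "broker")])    # the bridge

    # The broker's two contacts are themselves unconnected, so its
    # effective size is maximal for its degree and its constraint is low.
    print("broker effective size:", nx.effective_size(G)["broker"])
    print("broker constraint:", round(nx.constraint(G)["broker"], 2))
    # A node embedded in one dense cluster, for comparison:
    print("embedded node constraint:", round(nx.constraint(G)["a"], 2))

Comparing the broker with a node embedded in one of the dense clusters makes the contrast concrete: the embedded node's contacts all know one another, so its constraint comes out noticeably higher, the signature of a redundant, hole-free network position.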
Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods. Community development studies today also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovations such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.
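Respondent-driven sampling is easiest to picture as wave-by-wave recruitment over the hidden population's (unobserved) contact network. A deliberately simplified snowball sketch in Python follows; the coupon limit and wave count are illustrative, and the inverse-degree weighting needed for unbiased RDS estimates is omitted:

    import random

    import networkx as nx

    def snowball_sample(G, seeds, coupons=3, waves=4, seed=0):
        """Recruit up to `coupons` previously unmet contacts per respondent, per wave."""
        rng = random.Random(seed)
        sampled = set(seeds)
        frontier = list(seeds)
        for _ in range(waves):
            next_wave = []
            for respondent in frontier:
                unmet = [v for v in G.neighbors(respondent) if v not in sampled]
                for recruit in rng.sample(unmet, min(coupons, len(unmet))):
                    sampled.add(recruit)
                    next_wave.append(recruit)
            frontier = next_wave
        return sampled

    # A stand-in for the hidden population's contact network.
    population = nx.barabasi_albert_graph(500, 3, seed=1)
    print("respondents reached:", len(snowball_sample(population, seeds=[0, 1])))

Because recruitment follows network ties, high-degree individuals are over-represented in such a sample; real respondent-driven sampling corrects for this by weighting respondents inversely to their reported network size.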
The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy. Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization techniques from SNA. Network research also examines formal and informal organizational relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. This particular cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of the consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by content, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to form between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Social network data can therefore be used to measure the degree of segregation or homophily in a population. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
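One common way to quantify the homophily just described is attribute assortativity, which compares within-group to between-group ties. A brief sketch with networkx, using a made-up group attribute on a toy six-person network:

    import networkx as nx

    # Toy network: six people in two groups, with mostly within-group ties.
    G = nx.Graph()
    G.add_edges_from([(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)])
    groups = {1: "red", 2: "red", 3: "red", 4: "blue", 5: "blue", 6: "blue"}
    nx.set_node_attributes(G, groups, "group")

    # +1 would mean perfectly homophilous mixing, 0 random mixing, and
    # negative values heterophily; here only one of seven ties crosses
    # group lines, so the coefficient is strongly positive (about 0.71).
    print(nx.attribute_assortativity_coefficient(G, "group"))

The same coefficient computed on observed contact or friendship data yields a single summary of how segregated a network is along the chosen attribute.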
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-Silkenat_2019_p._252-121] | [TOKENS: 17273]
United States The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k] Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights. They evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German and British economies combined. By 1900, the country had established itself as a great power, a status solidified after its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. Its aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower. The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement. A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890.
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Making up more than a third of global military spending, the country has one of the strongest armed forces and is a designated nuclear state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs. Etymology Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776. The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country. The initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb. "America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America. History The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small isolated groups of hunter-gatherers are said to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, and the Algonquian in the Great Lakes region and along the Eastern Seaboard, while the Hohokam culture and Ancestral Puebloans inhabited the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million. 
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements failed there due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1578) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764) and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware). British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations. The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty. Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, resulting in growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The British attempt to then disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army, and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire to a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, Thomas Paine, and many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas. Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand with the admission of new states, rather than the expansion of existing states. The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War and his later refusal to run for a third term as the country's first president established a precedent for the supremacy of civil authority in the United States and the peaceful transfer of power. In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there. Primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel. As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion as well as this influx of Indigenous peoples from the East resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery became legal in all of the Thirteen Colonies, but by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and, spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, with widespread use of inventions such as the cotton gin (1793) having made slavery immensely profitable for Southern elites. The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created. Throughout the 1850s, the sectional conflict regarding slavery was further inflamed by national legislation in the U.S. Congress and decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return to their owners in the South of slaves taking refuge in non-slave states, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise to be unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865). Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America. All other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude except as punishment for crimes, promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. It continued into the early 20th century, when the United States already outpaced the economies of Britain, France, and Germany combined. This fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, the rise of radio for mass communication and early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts on Japan's allies Italy and Germany until their final defeat in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The U.S. housing bubble culminated in 2007 with the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada.[c] The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes in the U.S. are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones spanning approximately 4.5 million square miles (11.7 million km2) of ocean. 
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida and U.S. territories in the Caribbean and Pacific are tropical. The United States experiences more high-impact extreme weather events than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are often the most vulnerable. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since 1964, with the Wilderness Act. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy.[r] Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, they developed independently in the 18th century with the Federalist and Anti-Federalist parties. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform while the latter is perceived as relatively conservative in its platform. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries host formal diplomatic missions with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments greater than 100 active duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations[t] fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard in its modern form and provided for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriffs' departments in their municipal or county jurisdictions. State police departments have authority in their respective states, while federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing federal laws and the rulings of U.S. federal courts, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world (531 people per 100,000 inhabitants) and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest in nominal terms since about 1890. Its 2024 gross domestic product (GDP)[e] of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for purchasing power parity, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasury securities market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. is party to free trade agreements with several countries, including the USMCA with Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace, and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter.[u] It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states and had the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five children, or approximately 13 million, experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of the few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and in scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth in such spending as a percentage of GDP. In 2022, the United States was (after China) the country with the second-highest number of published scientific papers. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, among them Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The United States private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels; its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States accounted for about 4% of the world's population but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases, behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries with the highest vehicle ownership per capita, at 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, the only high-speed rail line in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to those of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia and the Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Sixteen of the world's 50 busiest airports, including five of the top ten, are in the United States. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, alongside some privately owned airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to the more passenger-centered rail systems of Europe). Because they are often privately owned, U.S. railroads lag behind those of the rest of the world in electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, the busiest among them being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020,[v] making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and in 2019 the country had the world's highest rate of children living in single-parent households, at 23%. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group, at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group, at 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group, at 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English is the de facto official language of the United States, and in 2025 Executive Order 14224 declared it official. The U.S. has nonetheless never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages),[w] South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.40 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken at home by 1 million people in 2010, fell to 857,000 speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities (New York City, Los Angeles, Chicago, and Houston) had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level and an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (up 0.7 years from 2023), while life expectancy for women was 81.4 years (up 0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries, for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law.[x] Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 laureates (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. In public expenditures on higher education, the U.S. spends more per student than the OECD average, and Americans spend more than any other nation in combined public and private spending. Colleges and universities directly funded by the federal government, which include the U.S. service academies, the Naval Postgraduate School, and military staff colleges, do not charge tuition and are limited to military personnel and government employees. Despite some student loan forgiveness programs, student loan debt increased by 102% between 2010 and 2020 and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity, the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described both as a homogenizing melting pot and as a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese-majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured, as well as the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Federal Council on the Arts and the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. In the early to mid-19th century, writer and critic John Neal helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th-century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered on industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcast television networks in the U.S., all commercial entities, are the National Broadcasting Company (NBC), the Columbia Broadcasting System (CBS), the American Broadcasting Company (ABC), and the Fox Broadcasting Company (Fox). The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. As of 2020, there were 15,460 licensed full-power radio stations in the U.S., according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers complementing the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT, all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S.
video game industry consisted of 2,457 companies that supported around 220,000 jobs and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater also has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances, and one is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles arrived early enough that they could have made a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but rural artisans there tended to continue their traditional forms longer than their urban counterparts did, and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, first invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular through the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton; Louis Armstrong and Duke Ellington then increased its popularity. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars, such as Frank Sinatra and Elvis Presley, became global celebrities and best-selling music artists, as have artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan. Labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles (the nation's second-most populous city), is also a metonym for the American filmmaking industry, whose major studios are the primary source of the world's most commercially successful and best-attended films. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, potatoes, and turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It would become the United States' most prestigious culinary school, where many of the most talented American chefs study before successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020 and employed more than 15 million people, or 10% of the nation's workforce directly. It is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose numbers expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts, and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League (NFL), the National Basketball Association (NBA), Major League Baseball (MLB), Major League Soccer (MLS), and the National Hockey League (NHL). All of these leagues enjoy wide-ranging domestic media coverage and, except for MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL has the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. On the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences, with the NCAA March Madness tournament and the College Football Playoff among the most watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
In other international competitions, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, at the time a world record crowd for a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Weak_artificial_intelligence#cite_note-7] | [TOKENS: 594] |
Contents Weak artificial intelligence Weak artificial intelligence (weak AI) is artificial intelligence that implements a limited part of the mind; as narrow AI or artificial narrow intelligence (ANI), it is focused on one narrow task. Weak AI is contrasted with strong AI, which can be interpreted in various ways. Narrow AI can be classified as being "limited to a single, narrowly defined task. Most modern AI systems would be classified in this category." Artificial general intelligence is, conversely, the opposite. Applications and risks Some examples of narrow AI are AlphaGo, self-driving cars, robot systems used in the medical field, and diagnostic doctors. Narrow AI systems are sometimes dangerous if unreliable, and their behavior can become inconsistent. It can be difficult for such an AI to grasp complex patterns and reach solutions that work reliably in varied environments; this "brittleness" can cause it to fail in unpredictable ways. Narrow AI failures can sometimes have significant consequences. They could, for example, cause disruptions in the electric grid, damage nuclear power plants, trigger global economic problems, or misdirect autonomous vehicles. Medicines could be incorrectly sorted and distributed, and medical diagnoses can ultimately have serious and sometimes deadly consequences if the AI is faulty or biased. Simple AI programs have already worked their way into society, oftentimes unnoticed by the public. Autocorrection for typing, speech recognition for speech-to-text programs, and vast expansions in the data science fields are examples. Narrow AI has also been the subject of some controversy, including cases of unfair prison sentencing, hiring discrimination against women, and deaths involving autonomous driving, among others. Despite being "narrow" AI, recommender systems are efficient at predicting user reactions based on their posts, patterns, or trends (a minimal illustrative sketch follows this extract). For instance, TikTok's "For You" algorithm can determine a user's interests or preferences in less than an hour. Some other social media AI systems are used to detect bots that may be involved in propaganda or other potentially malicious activities. Weak AI versus strong AI John Searle contests the possibility of strong AI (by which he means conscious AI). He further believes that the Turing test (created by Alan Turing and originally called the "imitation game", which assesses whether a machine can converse indistinguishably from a human) is not accurate or appropriate for testing whether an AI is "strong". Scholars such as Antonio Lieto have argued that current research on both AI and cognitive modelling is perfectly aligned with the weak-AI hypothesis (which should not be confused with the "general" versus "narrow" AI distinction), and that the popular assumption that cognitively inspired AI systems espouse the strong-AI hypothesis is ill-posed and problematic, since "artificial models of brain and mind can be used to understand mental phenomena without pretending that they are the real phenomena that they are modelling" (as, on the other hand, is implied by the strong-AI assumption). |
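The recommender systems mentioned above are a canonical example of narrow AI: a single, well-bounded prediction task. As a minimal sketch of the general idea only, the following toy Python program ranks candidate posts by how much their tags overlap with a user's recent interaction history; it is not any real platform's algorithm, and all function names and data here are hypothetical.

```python
# Toy "narrow AI" recommender: it performs one narrowly defined task
# (ranking posts by similarity to a user's recent interactions) and
# nothing else. Illustrative sketch only; names and data are hypothetical.
from collections import Counter

def interest_profile(interacted_tag_sets):
    """Build a weighted tag profile from the tags of posts the user engaged with."""
    counts = Counter(tag for tags in interacted_tag_sets for tag in tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def rank_posts(profile, candidates):
    """Score each candidate post by the summed profile weight of its tags."""
    scored = [(sum(profile.get(tag, 0.0) for tag in tags), post)
              for post, tags in candidates.items()]
    return [post for _, post in sorted(scored, reverse=True)]

history = [{"cooking", "baking"}, {"cooking", "travel"}, {"baking"}]
candidates = {"post_a": {"baking", "dessert"}, "post_b": {"sports"}}
print(rank_posts(interest_profile(history), candidates))  # ['post_a', 'post_b']
```

Even this trivial scorer exhibits the brittleness discussed above: it knows nothing outside its tag vocabulary, so a post described with unseen tags simply scores zero rather than being handled gracefully.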
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Role_theory] | [TOKENS: 2078] |
Contents Role theory Role theory (or social role theory) is a concept in sociology and in social psychology that considers most everyday activity to be the acting-out of socially defined categories (e.g., mother, manager, teacher). Each role is a set of rights, duties, expectations, norms, and behaviors that a person has to face and fulfill. The model is based on the observation that people behave in a predictable way, and that an individual's behavior is context specific, based on social position and other factors. Research conducted on role theory mainly centers on the concepts of consensus, role conflict, role taking, and conformity. Although the word role has existed in European languages for centuries, as a sociological concept the term has only been around since the 1920s and 1930s. It became more prominent in sociological discourse through the theoretical works of George Herbert Mead, Jacob L. Moreno, Talcott Parsons, Ralph Linton, and Georg Simmel. Two of Mead's concepts, the mind and the self, are the precursors to role theory. Depending on the general perspective of the theoretical tradition, there are many types of role theory; it may, however, be divided into two major types: structural functionalist role theory and dramaturgical role theory. Structural functionalist role theory holds that everyone has a place in the social structure and that every place has a corresponding role, with its own set of expectations and behaviors: life is structured, and there is a specific place for everything. In contrast, dramaturgical role theory defines life as a never-ending play in which we are all actors; its essence is to role-play in a manner acceptable to society. Robert Kegan's theory of adult development also bears on role theory. It distinguishes three pivotal mindsets: the socialized mind, in which people base their actions on the opinions of others; the self-authoring mind, which breaks loose of others' thoughts and makes its own decisions; and the self-transforming mind, which listens to the thoughts and opinions of others yet is still able to decide for itself. About 60 percent of people remain in the socialized mindset well into their adult years, while less than 1 percent reach the self-transforming mindset. Role theory describes people following the perceived roles and standards that society has normalized; because of the socialized mind, people are confined to the roles that have been placed around them, and this internalization of the values of others underlies role behavior. A key insight of the theory is that role conflict occurs when a person is expected to simultaneously act out multiple roles that carry contradictory expectations: the person is pulled in different directions while striving to meet various societal standards and statuses. Role Substantial debate exists in the field over the meaning of role in role theory. A role can be defined as a social position, behavior associated with a social position, or a typical behavior. Some theorists have put forward the idea that roles are essentially expectations about how an individual ought to behave in a given situation, whereas others consider a role to be how individuals actually behave in a given social position. Some have suggested that a role is a characteristic behavior or expected behavior, a part to be played, or a script for social conduct.
In sociology, there are several different categories of social roles, some of which are discussed below. Role theory models behavior as patterns of behaviors to which one can conform, with this conformity being based on the expectations of others.[a] It has been argued that a role must in some sense be defined in relation to others,[b] though the manner and degree of this are debated by sociologists. Turner used the concept of an "other-role", arguing that the process of defining a role is negotiating one's role with other-roles.[c] Turner argued that the process of describing a role, which would otherwise be implicit, also modifies it; he referred to this process as role-making, and argued that very formal roles, such as those in the military, are not representative of roles in general because in them the role-making process is suppressed.[d] Sociologist Howard S. Becker similarly claims that the label given and the definition used in a social context can change actions and behaviors. Situation-specific roles develop ad hoc in a given social situation. However, it can be argued that the expectations and norms that define such an ad hoc role are themselves defined by the social role. The word consensus is used when a group of people holds the same expectations through agreement. We live in a society where people know how they should act, as a result of learned behaviors stemming from social norms; on the whole, society follows typical roles and their expected norms, and a standard is thereby created through the conformity of social groups. The relationship between roles and norms Some theorists view behavior as being enforced by social norms. Turner rather argues that there is a norm of consistency, so that failing to conform to a role breaks a norm by violating consistency. Cultural roles are seen as a matter of course and are mostly stable. In cultural changes new roles can develop and old roles can disappear; these cultural changes are affected by political and social conflicts. For example, the feminist movement initiated a change in male and female roles in Western societies. The roles, and more specifically the exact duties, of men are being questioned. With more women going further in school than men come greater financial and occupational benefits, though these benefits have not been shown to increase women's happiness. Social differentiation received a lot of attention due to the development of different job roles. Robert K. Merton distinguished between intrapersonal and interpersonal role conflicts. For example, a foreman has to develop his own social role facing the expectations of his team members and his supervisor; this is an interpersonal role conflict. He also has to arrange his different social roles as father, husband, and club member; this is an intrapersonal role conflict. Ralf Dahrendorf distinguished between must-expectations, with sanctions; shall-expectations, with sanctions and rewards; and can-expectations, with rewards. The foreman has to avoid corruption; he should satisfy his reference groups (e.g., team members and supervisors); and he can be sympathetic. Dahrendorf argues that another component of role theory is that people accept their own roles in the society, and that it is not the society that imposes them. In their lives, people have to face different social roles, sometimes facing several roles at the same time in different social situations. There is an evolution of social roles: some disappear and new ones develop.
Role behavior is influenced by the norms that define a social situation, by the internal and external expectations connected to a role, and by the social sanctions (punishment and reward) used to enforce it. These three aspects are used to evaluate one's own behavior as well as the behavior of other people. Heinrich Popitz defines social roles as norms of behavior that a special social group has to follow. Norms of behavior are a set of behaviors that have become typical among group members; in case of deviance, negative sanctions follow. Gender has played a crucial role in societal norms and in the distinction between how female and male roles are viewed in society, specifically within the workplace and in the home. Historically, there was a division of roles created by society on the basis of gender: gender was understood as a social difference between female and male, whereas sex was a matter of nature. Gender became a way to categorize men and women and divide them into their societal roles. Although gender is important, women are also categorized in other ways, such as racially and through class experience. While societal roles derive from gender, there will always be a separation between females and males. As jobs and industry have moved away from strength-based labor, women have advanced their education for employment. Sex segregation between women and men has decreased as society has evolved away from traditional gender roles. In public relations Role theory is a perspective that considers everyday activity to be the acting-out of socially defined categories. It is split into two narrower definitions: status, one's position within a social system or group; and role, one's pattern of behavior associated with a status. Organizational role is defined as "recurring actions of an individual, appropriately interrelated with the repetitive activities of others so as to yield a predictable outcome" (Katz & Kahn, 1978). Within an organization there are three main typologies of roles. Role conflict, strain, or making Despite variations in the terms used, the central component of all of the formulations is incompatibility. Role conflict is a conflict among the roles corresponding to two or more statuses, for example, teenagers who have to deal with pregnancy (statuses: teenager, mother). Role conflict is said to exist when there are important differences among the ratings given for various expectations; by comparing the extent of agreement or disagreement among the ranks, a measure of role conflict is obtained (one way to compute such a rank-based measure is sketched at the end of this extract). Role strain or "role pressure" may arise when there is a conflict in the demands of roles, when an individual does not agree with the assessment of others concerning his or her performance in his or her role, or from accepting roles that are beyond an individual's capacity. Role making is defined by Graen as leader–member exchange. At the same time, a person may have limited power to negotiate away from accepting roles that cause strain, because he or she is constrained by societal norms, or has limited social status from which to bargain. Criticism and limitations Role theorists have noted that a weakness of role theory lies in describing and explaining deviant behavior.
Role theory has been criticized for reinforcing commonly held prejudices about how people should behave,[e] including how they should portray themselves and how others should behave; for viewing the individual as responsible for fulfilling the expectations of a role rather than holding others responsible for creating a role the individual can perform;[f] and for insufficiently explaining power relations, since in some situations an individual does not consensually fulfill a role but is forced into behaviors by power.[g] It is also argued that role theory does not explain individual agency in negotiating one's role[h] and that role theory artificially merges roles when in practice an individual might combine roles.[i] Others have argued that the concept of role takes on such a broad definition as to be meaningless.[j]
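To illustrate the rank-comparison measure of role conflict mentioned above, here is a minimal Python sketch. It is illustrative only and is not drawn from the sociological literature: it assumes several evaluators each rank the same set of role expectations by importance, and it scores conflict as the average pairwise Spearman footrule distance between their rankings, normalized to the range 0 (full consensus) to 1 (maximal disagreement). All names and data in it are hypothetical.

from itertools import combinations

def footrule(rank_a, rank_b):
    # Spearman footrule: sum of absolute rank differences over the same items.
    return sum(abs(rank_a[item] - rank_b[item]) for item in rank_a)

def role_conflict(rankings):
    # Average pairwise footrule distance between evaluators, scaled to [0, 1].
    evaluators = list(rankings)
    n_items = len(rankings[evaluators[0]])
    max_dist = n_items * n_items // 2  # maximum possible footrule distance
    pairs = list(combinations(evaluators, 2))
    total = sum(footrule(rankings[a], rankings[b]) for a, b in pairs)
    return total / (len(pairs) * max_dist)

# Hypothetical expectations placed on a foreman, ranked 1 = most important.
rankings = {
    "team members": {"output": 3, "fairness": 1, "loyalty": 2},
    "supervisor":   {"output": 1, "fairness": 3, "loyalty": 2},
}
print(f"role conflict index: {role_conflict(rankings):.2f}")  # prints 1.00 here

Consensus, in the sense used above, corresponds to an index of 0; higher values indicate that the evaluators' expectations for the role diverge.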
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Akkadian_Empire] | [TOKENS: 10208] |
Contents Akkadian Empire The Akkadian Empire (/əˈkeɪdiən/) or kingdom of Akkad/Agade was an ancient kingdom, often considered to be the first known empire, succeeding the long-lived city-states of Sumer. Centered on the city of Akkad (/ˈækæd/ or /ˈɑːkɑːd/) and its surrounding region in modern-day Iraq, the empire united the Semitic Akkadian and Sumerian speakers under one rule and exercised significant influence across Mesopotamia, the Levant, modern-day Iran and Anatolia, sending military expeditions as far south as Dilmun and Magan in the Arabian Peninsula. Established by Sargon of Akkad after defeating the Sumerian king Lugal-zage-si, it replaced the system of independent Sumero-Akkadian city-states and unified a vast region, stretching from the Mediterranean to Iran and from Anatolia to the Persian Gulf, under a centralized government. Sargon and his successors, especially his grandson Naram-Sin, expanded the empire through military conquest, administrative reforms, and cultural integration. Naram-Sin took the unprecedented step of declaring himself a living god and adopted the title "King of the Four Quarters." The Semitic Akkadian language became the empire's lingua franca, although Sumerian (a language isolate) remained important in religion and literature. The empire was documented through inscriptions, administrative tablets, and seals, including notable sources like the Bassetki Statue. Enheduanna, Sargon's daughter, served as high priestess and is recognized as the first known named author in history. The Akkadian Empire reached its political peak between the 24th and 22nd centuries BC, following the conquests by its founder Sargon. Under Sargon and his successors, the Akkadian language was briefly imposed on neighbouring conquered states such as Elam, Lullubi, Hatti and Gutium. Akkad is sometimes regarded as the first empire in history, though the meaning of this term is not precise, and there are earlier Sumerian claimants. The Akkadian state was characterized by a planned economy supported by agriculture, taxation, and conquest. It also saw developments in art, technology, and long-distance trade, including connections with the Indus Valley. Despite its strength, the empire faced internal revolts, dynastic instability, and external threats. Sargon's sons, Rimush and Manishtushu, struggled to maintain control; both died violently. Naram-Sin's successors were weaker, leading to fragmentation and vulnerability. The empire eventually collapsed due to a combination of internal unrest and severe environmental and economic stress caused by a major drought associated with the 4.2-kiloyear climate event, which led to crop failures, famine, urban decline, and population displacement, followed by an invasion by the Gutians. Contemporary epigraphic sources Epigraphic sources from the Sargonic (Akkadian Empire) period are uncommon, partly because the capital Akkad, like the capitals of the later Mitanni and Sealand, has not yet been located, though there has been much speculation. Some cuneiform tablets have been excavated at cities under Akkadian Empire control such as Eshnunna and Tell Agrab. Other tablets have become available on the antiquities market and are held in museums and private collections, such as those from the Akkadian governor in Adab. Internal evidence allows their dating to the Sargonic period and sometimes the identification of their original location. Archives are especially important to historians, but only a few have become available.
The Me-sag Archive, which commenced publication in 1958, is considered one of the most significant collections. The tablets, about 500 in number with about half published, are held primarily at the Babylonian Collection of Yale University and the Baghdad Museum, with a few others scattered about. The tablets date from late in the reign of Naram-Sin to early in the reign of Shar-kali-sharri. They are believed to be from a town between Umma and Lagash, and Me-sag is believed to have been the governor of Umma. An archive of 47 tablets was found at the excavation of Tell Suleimah in the Hamrin Basin. Various royal inscriptions by the Akkadian rulers have also been found. Most of the original examples are short, or very fragmentary like the Victory Stele of Naram-Sin and the Sargonic victory stele from Telloh. A few longer ones are known from later copies, often made in the much later Old Babylonian period. While these are assumed to be mostly accurate, it is difficult to know whether they had been edited to reflect current political conditions. One of the longer surviving examples is the Bassetki Statue, the copper base of a Naram-Sin statue: "Naram-Sin, the mighty, king of Agade, when the four quarters together revolted against him, through the love which the goddess Astar showed him, he was victorious in nine battles in one year, and the kings whom they (the rebels[?]) had raised (against him), he captured. In view of the fact that he protected the foundations of his city from danger, (the citizens of) his city requested from Astar in Eanna, Enlil in Nippur, Dagan in Tuttul, Ninhursag in Kes, Ea in Eridu, Sin in Ur, Samas in Sippar, (and) Nergal in Kutha, that (Naram-Sin) be (made) the god of their city, and they built within Agade a temple (dedicated) to him. As for the one who removes this inscription, may the gods Samas, Astar, Nergal, the bailiff of the king, namely all those gods (mentioned above) tear out his foundations and destroy his progeny." A number of fragments of royal statues of Manishtushu survive, all bearing portions of a "standard inscription". Aside from a few minor short inscriptions, this is the only known contemporary source for this ruler. An excerpt: "Man-istusu, king of the world: when he conquered Ansan and Sirihum, had ... ships cross the Lower Sea. The cities across the Sea, thirty-two (in number), assembled for battle, but he was victorious (over them). Further, he conquered their cities, [st]ru[c]k down their rulers and aft[er] he [roused them (his troops)], plundered as far as the Silver Mines. He quarried the black stone of the mountains across the Lower Sea, loaded (it) on ships, and moored (the ships) at the quay of Agade" Before the Akkadian Empire, calendar years were marked by regnal numbers. During Sargonic times, a system of year-names was used. This practice continued until the end of the Old Babylonian period, for example, "Year in which the divine Hammu[rabi], the king, destroyed Esznunna by a flood." Afterwards, regnal numbers were used by all succeeding kingdoms. Of the presumed 40 year-names of Sargon, only 3 are known; 1 of a presumed 9 of Rimush; 20 of a presumed 56 of Naram-Sin; and 18 of a presumed 18 of Shar-kali-sharri. Recently, a single year-name has been found: "In the year that Dūr-Maništusu was established." There are also, perhaps, a dozen more known which cannot be firmly linked to a ruler. Especially given the paucity of other inscriptions, year-names are extremely important in determining the history of the Akkadian Empire.
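To make the sparseness of this record concrete, the following minimal Python sketch simply tabulates the attested versus presumed year-name counts just given; the counts come from the text above, while the code itself is only an illustration.

# Attested vs. presumed year-names per Akkadian ruler (counts from the text above).
year_names = {
    "Sargon": (3, 40),
    "Rimush": (1, 9),
    "Naram-Sin": (20, 56),
    "Shar-kali-sharri": (18, 18),
}

for ruler, (known, presumed) in year_names.items():
    print(f"{ruler:<17} {known:>2} of {presumed:<2} year-names known "
          f"({100 * known / presumed:.0f}%)")

Even for the best-attested ruler the record is thin in absolute terms, which is why each newly identified year-name matters.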
As an example, from one year-name we know that the empire was in conflict with the Gutians long before its end. It attests the name of a Gutian ruler and marks the construction of two temples in Babylon in recognition of an Akkadian victory: "In the year in which Szarkaliszarri laid the foundations of the temples of the goddess Annunitum and of the god Aba in Babylon and when he defeated Szarlak, king of Gutium" The final contemporary sources are seals and their sealings. Given the shortage of other Akkadian Empire epigraphic material, these are especially important as markers and very useful to historians. As an example, two seals and one sealing were found in the Royal Cemetery at Ur which contained the name of Sargon's daughter En-hedu-ana. This provided confirmation of her existence. The seals read "En-hedu-ana, daughter of Sargon: Ilum-pal[il] (is) her coiffeur" and "Adda, estate supervisor/majordomo of En-hedu-ana". Excavations at Tell Mozan (ancient Urkesh) brought to light a clay sealing of Tar'am-Agade ("Akkad loves <her>"), a previously unknown daughter of Naram-Sin, who was possibly married to an unidentified local endan (ruler). Later copies and literary compositions So great was the fame of the Akkadian Empire, especially of Sargon and Naram-Sin, that its history was passed down for millennia. This ranged from purported copies of still-extant Sargonic-period inscriptions at one end to literary tales made up out of whole cloth at the other. A few examples: "... By the verdict of the goddess Astar-Annunitum, Naram-Sin, the mighty, [was vic]torious over the Kisite in battle at TiWA. [Further], Ili-resi, the general; Ilum-muda, Ibbi-Zababa, Imtalik, (and) Puzur-Asar, captains of Kis; and Puzur-Ningal, governor of TiWA; Ili-re'a, his captain; Kullizum, captain of Eres; Edam'u, captain of Kutha ..." "...Enlil brought out of the mountains those who do not resemble other people, who are not reckoned as part of the Land, the Gutians, an unbridled people, with human intelligence but canine instincts and monkeys' features. Like small birds they swooped on the ground in great flocks. Because of Enlil, they stretched their arms out across the plain like a net for animals. Nothing escaped their clutches, no one left their grasp. Messengers no longer traveled the highways, the courier's boat no longer passed along the rivers. The Gutians drove the trusty (?) goats of Enlil out of their folds and compelled their herdsmen to follow them, they drove the cows out of their pens and compelled their cowherds to follow them. Prisoners manned the watch. Brigands occupied the highways. The doors of the city gates of the Land lay dislodged in mud, and all the foreign lands uttered bitter cries from the walls of their cities ..." There were a number of these compositions, passed down as part of the scribal tradition, including The Birth Legend of Sargon (Neo-Assyrian), the Weidner Chronicle, and the Geographical Treatise on Sargon of Akkad's Empire. Archaeology Identifying architectural remains is hindered by the fact that there are sometimes no clear distinctions between features thought to stem from the preceding Early Dynastic period and those thought to be Akkadian. Likewise, material that is thought to be Akkadian continued in use into the Ur III period. There is a similar issue with cuneiform tablets: in the early Akkadian Empire, tablets and the signs on them are much like those from earlier periods, before developing into the quite different Classical Sargonic style.
With the capital, Akkad, still unlocated, archaeological remains of the empire are found mainly at the cities where it established regional governors. An example is Adab, where Naram-Sin established direct imperial control after Adab joined the "great revolt". After destroying the city of Mari, the Akkadian Empire rebuilt it as an administrative center with an imperial governor. The city of Nuzi was established by the Akkadians, and a number of economic and administrative texts were found there. Similarly, there are Marad, Nippur, Tutub and Ebla. Excavation at the modern site of Tell Brak has suggested that the Akkadians rebuilt a city ("Brak" or "Nagar") on this site for use as an administrative center. The city included two large buildings, including a complex with temple, offices, courtyard, and large ovens. Dating and periodization The Akkadian period is generally dated to 2334–2154 BC (according to the middle chronology). The short-chronology dates of 2270–2083 BC are now considered less likely. It was preceded by the Early Dynastic Period of Mesopotamia (ED) and succeeded by the Ur III Period, although both transitions are blurry. For example, it is likely that the rise of Sargon of Akkad coincided with the late ED Period and that the final Akkadian kings ruled simultaneously with the Gutian kings alongside rulers at the city-states of both Uruk and Lagash. The Akkadian Period is contemporary with EB IV (in Israel), EB IVA and EJ IV (in Syria), and EB IIIB (in Turkey). The relative order of Akkadian kings is clear, while noting that the Ur III version of the Sumerian King List inverts the order of Rimush and Manishtushu. The absolute dates of their reigns are approximate (as with all dates prior to the Late Bronze Age collapse c. 1200 BC). History and development of the empire The Akkadian Empire takes its name from the region and the city of Akkad, both of which were localized in the general confluence area of the Tigris and Euphrates Rivers. Although the city of Akkad has not yet been identified on the ground, it is known from various textual sources. Among these is at least one text predating the reign of Sargon. Together with the fact that the name Akkad is of non-Akkadian origin, this suggests that the city of Akkad may have already been occupied in pre-Sargonic times. The earliest records in the Akkadian language date to the time of Sargon of Akkad, who defeated the Sumerian king Lugal-zage-si at the Battle of Uruk and conquered his former territory, establishing the Akkadian Empire. The Sumerian King List claims that Sargon was the son of a gardener. Later legends named his father as La'ibum or Itti-Bel and his birth mother as a priestess (or possibly even a hierodule) of Ishtar, the Akkadian equivalent of the Sumerian goddess Inanna. One legend of Sargon from Neo-Assyrian times quotes him as saying My mother was a changeling, my father I knew not. The brothers of my father loved the hills. My city is Azurpiranu (the wilderness herb fields), which is situated on the banks of the Euphrates. My changeling mother conceived me, in secret she bore me. She set me in a basket of rushes, with bitumen she sealed my lid. She cast me into the river which rose not over me. The river bore me up and carried me to Akki, the drawer of water. Akki, the drawer of water, took me as his son and reared me. Akki, the drawer of water, appointed me as his gardener. While I was gardener Ishtar granted me her love, and for four and (fifty?) ... years I exercised kingship.
Later claims made on behalf of Sargon were that his mother was an "entu" priestess (high priestess). The claims might have been made to ensure a pedigree of nobility, since only a highly placed family could achieve such a position. Originally a cupbearer (Rabshakeh) to Ur-Zababa, a king of Kish with a Semitic name, Sargon later became a gardener, responsible for the task of clearing out irrigation canals. The royal cupbearer at this time was in fact a prominent political position, close to the king and with various high-level responsibilities not suggested by the title of the position itself. The gardening post gave him access to a disciplined corps of workers, who may also have served as his first soldiers. Displacing Ur-Zababa, Sargon was crowned king, and he entered upon a career of foreign conquest. Four times he invaded Syria and Canaan, and he spent three years thoroughly subduing the countries of "the west" to unite them with Mesopotamia "into a single empire". However, Sargon took this process further, conquering many of the surrounding regions to create an empire that reached westward as far as the Mediterranean Sea and perhaps Cyprus (Kaptara); northward as far as the mountains (a later Hittite text asserts he fought the Hattian king Nurdaggal of Burushanda, well into Anatolia); eastward over Elam; and as far south as Magan (Oman) — a region over which he reigned for purportedly 56 years, though only four "year-names" survive. He consolidated his dominion over his territories by replacing the earlier opposing rulers with noble citizens of Akkad, his native city, where loyalty was thus ensured. Trade extended from the silver mines of Anatolia to the lapis lazuli mines in modern Afghanistan, the cedars of Lebanon and the copper of Magan. This consolidation of the city-states of Sumer and Akkad reflected the growing economic and political power of Mesopotamia. The empire's breadbasket was the rain-fed agricultural system of the north, and a chain of fortresses was built to control the imperial wheat production. Images of Sargon were erected on the shores of the Mediterranean, in token of his victories, and cities and palaces were built at home with the spoils of the conquered lands. Elam and the northern part of Mesopotamia were also subjugated, and rebellions in Sumer were put down. Contract tablets have been found dated in the years of the campaigns against Canaan and against Sarlak, king of Gutium. He also boasted of having subjugated the "four quarters" — the lands surrounding Akkad to the north, the south (Sumer), the east (Elam), and the west (Martu). Some of the earliest historiographic texts (ABC 19, 20) suggest he rebuilt the city of Babylon (Bab-ilu) in its new location near Akkad. Troubles multiplied toward the end of his reign. A later Babylonian text states: In his old age, all the lands revolted against him, and they besieged him in Akkad (the city) [but] he went forth to battle and defeated them, he knocked them over and destroyed their vast army. It refers to his campaign in "Elam", where he defeated a coalition army led by the King of Awan and forced the vanquished to become his vassals. Shortly afterwards, another revolt took place: the Subartu of the upper country in their turn attacked, but they submitted to his arms, and Sargon settled their habitations, and he smote them grievously. Sargon thus crushed opposition even in old age.
These difficulties broke out again in the reign of his sons. Revolts erupted during the nine-year reign of Rimush (2278–2270 BC), who fought hard to retain the empire and was successful until he was assassinated by some of his own courtiers. According to his inscriptions, he faced widespread revolts and had to reconquer the cities of Ur, Umma, Adab, Lagash, Der, and Kazallu from rebellious ensis. Rimush introduced mass slaughter and large-scale destruction of the Sumerian city-states, and maintained meticulous records of his destructions. Most of the major Sumerian cities were destroyed, and Sumerian human losses were enormous. Rimush's elder brother, Manishtushu (2269–2255 BC), succeeded him. The latter seems to have fought a sea battle against 32 kings who had gathered against him and took control over their pre-Arab country, consisting of modern-day United Arab Emirates and Oman. Despite the success, like his brother he seems to have been assassinated in a palace conspiracy. Manishtushu's son and successor, Naram-Sin (2254–2218 BC), due to vast military conquests, assumed the imperial title "King Naram-Sin, king of the four quarters" (Lugal Naram-Sîn, Šar kibrat 'arbaim), the four quarters being a reference to the entire world. He was also, for the first time in Sumerian culture, addressed as "the god (Sumerian = DINGIR, Akkadian = ilu) of Agade" (Akkad), in opposition to the previous religious belief that kings were only representatives of the people towards the gods. He also faced revolts at the start of his reign, but quickly crushed them. Naram-Sin also recorded the Akkadian conquest of Ebla as well as Armanum and its king. To better police Syria, he built a royal residence at Tell Brak, a crossroads at the heart of the Khabur River basin of the Jezirah. Naram-Sin campaigned against Magan, which also revolted; Naram-Sin "marched against Magan and personally caught Mandannu, its king", and he installed garrisons to protect the main roads. The chief threat seemed to be coming from the northern Zagros Mountains, from the Lullubis and the Gutians. A campaign against the Lullubi led to the carving of the "Victory Stele of Naram-Sin", now in the Louvre. Hittite sources claim Naram-Sin of Akkad even ventured into Anatolia, battling the Hittite and Hurrian kings Pamba of Hatti, Zipani of Kanesh, and 15 others. The economy was highly planned. Grain was cleaned, and rations of grain and oil were distributed in standardized vessels made by the city's potters. Taxes were paid in produce and labour on public works, including city walls, temples, irrigation canals and waterways, producing huge agricultural surpluses. This newfound Akkadian wealth may have been based upon benign climatic conditions, huge agricultural surpluses and the confiscation of the wealth of other peoples. In later Assyrian and Babylonian texts, the name Akkad, together with Sumer, appears as part of the royal title, as in the Sumerian LUGAL KI-EN-GI KI-URI or Akkadian Šar māt Šumeri u Akkadi, translating to "king of Sumer and Akkad". This title was assumed by the king who seized control of Nippur, the intellectual and religious center of southern Mesopotamia. During the Akkadian period, the Akkadian language became the lingua franca of the Middle East and was officially used for administration, although Sumerian remained as a spoken and literary language. The spread of Akkadian stretched from Syria to Elam, and even the Elamite language was temporarily written in Mesopotamian cuneiform.
Akkadian texts later found their way to far-off places, from Egypt (in the Amarna Period) and Anatolia to Persia (Behistun). The submission of some Sumerian rulers to the Akkadian Empire is recorded in the seal inscriptions of Sumerian rulers such as Lugal-ushumgal, governor (ensi) of Lagash ("Shirpula"), circa 2230–2210 BC. Several inscriptions of Lugal-ushumgal are known, particularly seal impressions, which refer to him as governor of Lagash and at the time a vassal (𒀵, arad, "servant" or "slave") of Naram-Sin, as well as of his successor Shar-kali-sharri. One of these seals proclaims: "Naram-Sin, the mighty God of Agade, king of the four corners of the world, Lugal-ushumgal, the scribe, ensi of Lagash, is thy servant." — Seal of Lugal-ushumgal as vassal of Naram-Sin. It can be considered that Lugal-ushumgal was a collaborator of the Akkadian Empire, as was Meskigal, ruler of Adab. Later, however, Lugal-ushumgal was succeeded by Puzer-Mama who, as Akkadian power waned, achieved independence from Shar-kali-sharri, assuming the title of "King of Lagash" and starting the illustrious Second Dynasty of Lagash. The empire of Akkad likely fell in the 22nd century BC, within 180 years of its founding, ushering in a "Dark Age" with no prominent imperial authority until the Third Dynasty of Ur. The region's political structure may have reverted to the status quo ante of local governance by city-states. By the end of Shar-kali-sharri's reign, the empire had begun to unravel. After several years of chaos (and four kings), Dudu and Shu-turul appear to have restored some centralized authority for several decades; however, they were unable to prevent the empire from eventually collapsing outright. In the resulting power vacuum the Gutians, who had been conquered by Akkad during the reign of Shar-kali-sharri, took control of central Babylonia as far as Adab and Umma, and Anshan briefly controlled the Diyala region and the city of Akkad itself. Estimates of the length of this interregnum have ranged from 40 years to 100 years. In the preamble of the Code of Ur-Nammu, he claims to have liberated Akšak, Marada, Girikal, Kazallu, and Uṣarum from Anshan. Little is known about the Gutian period, or how long it endured. Cuneiform sources suggest that the Gutians' administration showed little concern for maintaining agriculture, written records, or public safety; they reputedly released all farm animals to roam about Mesopotamia freely and soon brought about famine and rocketing grain prices. The Sumerian king Ur-Nammu (2112–2095 BC) cleared the Gutians from Mesopotamia during his reign. The Sumerian King List, describing the Akkadian Empire after the death of Shar-kali-sharri, states: Who was king? Who was not king? Irgigi the king; Nanum, the king; Imi the king; Ilulu, the king—the four of them were kings but reigned only three years. Dudu reigned 21 years; Shu-Turul, the son of Dudu, reigned 15 years. ... Agade was defeated and its kingship carried off to Uruk. In Uruk, Ur-ningin reigned 7 years; Ur-gigir, son of Ur-ningin, reigned 6 years; Kuda reigned 6 years; Puzur-ili reigned 5 years; Ur-Utu reigned 6 years. Uruk was smitten with weapons and its kingship carried off by the Gutian hordes. However, there are no known year-names or other archaeological evidence verifying any of these later kings of Akkad or Uruk, apart from several artefacts referencing king Dudu of Akkad and Shu-turul.
The named kings of Uruk may have been contemporaries of the last kings of Akkad, but in any event they could not have been very prominent. In the Gutian hordes, (first reigned) a nameless king; (then) Imta reigned 3 years as king; Shulme reigned 6 years; Elulumesh reigned 6 years; Inimbakesh reigned 5 years; Igeshuash reigned 6 years; Iarlagab reigned 15 years; Ibate reigned 3 years; ... reigned 3 years; Kurum reigned 1 year; ... reigned 3 years; ... reigned 2 years; Iararum reigned 2 years; Ibranum reigned 1 year; Hablum reigned 2 years; Puzur-Sin son of Hablum reigned 7 years; Iarlaganda reigned 7 years; ... reigned 7 years; ... reigned 40 days. Total 21 kings reigned 91 years, 40 days. The period between c. 2112 BC and 2004 BC is known as the Ur III period. Documents again began to be written in Sumerian, although Sumerian was becoming a purely literary or liturgical language, much as Latin later became in Medieval Europe. One explanation for the end of the Akkadian empire is simply that the Akkadian dynasty could not maintain its political supremacy over other independently powerful city-states. One theory, which remains controversial, associates regional decline at the end of the Akkadian period (and of the First Intermediate Period following the Old Kingdom in Ancient Egypt) with rapidly increasing aridity and failing rainfall in the region of the Ancient Near East, caused by a global centennial-scale drought sometimes called the 4.2-kiloyear event. Harvey Weiss has shown that [A]rchaeological and soil-stratigraphic data define the origin, growth, and collapse of Subir, the third millennium rain-fed agriculture civilization of northern Mesopotamia on the Habur Plains of Syria. At 2200 BC, a marked increase in aridity and wind circulation, subsequent to a volcanic eruption, induced a considerable degradation of land-use conditions. After four centuries of urban life, this abrupt climatic change evidently caused abandonment of Tell Leilan, regional desertion, and the collapse of the Akkadian empire based in southern Mesopotamia. Synchronous collapse in adjacent regions suggests that the impact of the abrupt climatic change was extensive. Peter B. de Menocal has shown "there was an influence of the North Atlantic Oscillation on the streamflow of the Tigris and Euphrates at this time, which led to the collapse of the Akkadian Empire". More recent analysis of simulations from the HadCM3 climate model indicates that there was a shift to a more arid climate on a timescale that is consistent with the collapse of the empire. Excavation at Tell Leilan suggests that this site was abandoned soon after the city's massive walls were constructed, its temple rebuilt and its grain production reorganized. The debris, dust, and sand that followed show no trace of human activity. Soil samples show fine wind-blown sand, no trace of earthworm activity, reduced rainfall and indications of a drier and windier climate. Evidence shows that skeleton-thin sheep and cattle died of drought, and up to 28,000 people abandoned the site, presumably seeking wetter areas elsewhere. Tell Brak shrank in size by 75%. Trade collapsed. Nomadic herders such as the Amorites moved herds closer to reliable water supplies, bringing them into conflict with Akkadian populations. This climate-induced collapse seems to have affected the whole of the Middle East, and to have coincided with the collapse of the Egyptian Old Kingdom.
This collapse of rain-fed agriculture in the Upper Country meant the loss to southern Mesopotamia of the agrarian subsidies which had kept the Akkadian Empire solvent. Water levels within the Tigris and Euphrates fell 1.5 meters beneath the level of 2600 BC, and although they stabilized for a time during the following Ur III period, rivalries between pastoralists and farmers increased. Attempts were undertaken to prevent the former from herding their flocks in agricultural lands, such as the building of a 180 km (112 mi) wall known as the "Repeller of the Amorites" between the Tigris and Euphrates under the Ur III ruler Shu-Sin. Such attempts led to increased political instability; meanwhile, a severe depression occurred, re-establishing demographic equilibrium with the less favorable climatic conditions. Richard Zettler has critiqued the drought theory, observing that the chronology of the Akkadian empire is very uncertain and that the available evidence is not sufficient to show its economic dependence on the northern areas excavated by Weiss and others. He also criticizes Weiss for taking Akkadian writings literally as descriptions of certain catastrophic events. According to Joan Oates, at Tell Brak the soil "signal" associated with the drought lies below the level of Naram-Sin's palace. However, evidence may suggest a tightening of Akkadian control following the Brak 'event': for example, the construction of the heavily fortified 'palace' itself and the apparent introduction of greater numbers of Akkadian as opposed to local officials, perhaps a reflection of unrest in the countryside of the type that often follows some natural catastrophe. Furthermore, Brak remained occupied and functional after the fall of the Akkadians. In 2019, a study by Hokkaido University on fossil corals in Oman provided evidence that prolonged winter shamal seasons led to the salinization of the irrigated fields; the resulting dramatic decrease in crop production triggered a widespread famine and eventually the collapse of the ancient Akkadian Empire. Government The Akkadian government formed a "classical standard" with which all future Mesopotamian states compared themselves. Traditionally, the ensi was the highest functionary of the Sumerian city-states. In later traditions, one became an ensi by marrying the goddess Inanna, legitimising the rulership through divine consent. Initially, the monarchical lugal (lu = man, gal = great) was subordinate to the priestly ensi and was appointed in times of trouble, but by later dynastic times it was the lugal who had emerged as the preeminent role, having his own "é" (= house) or "palace", independent from the temple establishment. By the time of Mesalim, whichever dynasty controlled the city of Kish was recognised as šar kiššati (= king of Kish), and was considered preeminent in Sumer, possibly because this was where the two rivers approached each other, and whoever controlled Kish ultimately controlled the irrigation systems of the other cities downstream. As Sargon extended his conquest from the "Lower Sea" (Persian Gulf) to the "Upper Sea" (Mediterranean), it was felt that he ruled "the totality of the lands under heaven", or "from sunrise to sunset", as contemporary texts put it. Under Sargon, the ensis generally retained their positions, but were seen more as provincial governors. The title šar kiššati became recognised as meaning "lord of the universe".
Sargon is even recorded as having organised naval expeditions to Dilmun (Bahrain) and Magan, amongst the first organised military naval expeditions in history. Whether he also did so in the case of the Mediterranean, with the kingdom of Kaptara (possibly Cyprus), as claimed in later documents, is more questionable. With Naram-Sin, Sargon's grandson, this went further than with Sargon: the king was not only called "Lord of the Four Quarters (of the Earth)", but also elevated to the ranks of the dingir (= gods), with his own temple establishment. Previously a ruler could, like Gilgamesh, become divine after death, but the Akkadian kings, from Naram-Sin onward, were considered gods on earth in their lifetimes. Their portraits showed them as larger than mere mortals and at some distance from their retainers. One strategy adopted by both Sargon and Naram-Sin to maintain control of the country was to install their daughters, Enheduanna and Enmenanna respectively, as high priestess to Sin, the Akkadian version of the Sumerian moon deity, Nanna, at Ur, in the extreme south of Sumer; to install sons as provincial ensi governors in strategic locations; and to marry their daughters to rulers of peripheral parts of the Empire (Urkesh and Marhashe). A well-documented case of the latter is that of Naram-Sin's daughter Tar'am-Agade at Urkesh. Records at the Brak administrative complex suggest that the Akkadians appointed locals as tax collectors. Economy The population of Akkad, like that of nearly all pre-modern states, was entirely dependent upon the agricultural systems of the region, which seem to have had two principal centres: the irrigated farmlands of southern Iraq, which traditionally had a yield of 30 grains returned for each grain sown, and the rain-fed agriculture of northern Iraq, known as the "Upper Country". Southern Iraq during the Akkadian period seems to have been approaching its modern rainfall level of less than 20 mm (0.8 in) per year, with the result that agriculture was totally dependent upon irrigation. Before the Akkadian period, the progressive salinisation of the soils, produced by poorly drained irrigation, had been reducing yields of wheat in the southern part of the country, leading to the conversion to more salt-tolerant barley growing. Urban populations there had peaked already by 2600 BC, and demographic pressures were high, contributing to the rise of militarism apparent immediately before the Akkadian period (as seen in the Stele of the Vultures of Eannatum). Warfare between city-states had led to a population decline, from which Akkad provided a temporary respite. It was this high degree of agricultural productivity in the south that enabled the growth of the highest population densities in the world at this time, giving Akkad its military advantage. The water table in this region was very high and replenished regularly—by winter storms in the headwaters of the Tigris and Euphrates from October to March, and from snow-melt from March to July. Flood levels, which had been stable from about 3000 to 2600 BC, had started falling, and by the Akkadian period were a half-meter to a meter lower than recorded previously. Even so, the flat country and weather uncertainties made flooding much more unpredictable than in the case of the Nile; serious deluges seem to have been a regular occurrence, requiring constant maintenance of irrigation ditches and drainage systems.
Farmers were recruited into regiments for this work from August to October—a period of food shortage—under the control of city temple authorities, thus acting as a form of unemployment relief. Gwendolyn Leick has suggested that this was Sargon's original employment for the king of Kish, giving him experience in effectively organising large groups of men; a tablet reads, "Sargon, the king, to whom Enlil permitted no rival—5,400 warriors ate bread daily before him". Harvest was in the late spring and during the dry summer months. Nomadic Amorites from the northwest pastured their flocks of sheep and goats on the crop residue, watering them from the river and irrigation canals. For this privilege, they had to pay a tax in wool, meat, milk, and cheese to the temples, which distributed these products to the bureaucracy and priesthood. In good years, all went well, but in bad years, wild winter pastures were in short supply, and nomads sought to pasture their flocks in the grain fields, resulting in conflicts with farmers. The subsidizing of southern populations by the import of wheat from the north of the Empire appears to have temporarily overcome this problem, and it seems to have allowed economic recovery and a growing population within this region. As a result of their economic and agricultural policies, Sumer and Akkad had a surplus of agricultural products but were short of almost everything else, particularly metal ores, timber and building stone, all of which had to be imported. The spread of the Akkadian state as far as the "silver mountain" (possibly the Taurus Mountains), the "cedars" of Lebanon, and the copper deposits of Magan was largely motivated by the goal of securing control over these imports. One tablet, an Old Babylonian Period copy of an original inscription, reads: "Sargon, the king of Kish, triumphed in thirty-four battles (over the cities) up to the edge of the sea (and) destroyed their walls. He made the ships from Meluhha, the ships from Magan (and) the ships from Dilmun tie up alongside the quay of Agade. Sargon the king prostrated himself before (the god) Dagan (and) made supplication to him; (and) he (Dagan) gave him the upper land, namely Mari, Yarmuti, (and) Ebla, up to the Cedar Forest (and) up to the Silver Mountain" — Inscription by Sargon of Akkad (c. 2270–2215 BC) International trade developed during the Akkadian period. Indus–Mesopotamia relations also seem to have expanded: Sargon of Akkad (circa 2300 or 2250 BC) was the first Mesopotamian ruler to make an explicit reference to the region of Meluhha, which is generally understood as the Balochistan or Indus area. Culture In art, there was a great emphasis on the kings of the dynasty, alongside much that continued earlier Sumerian art. Little architecture remains. In large works and in small ones such as seals, the degree of realism was considerably increased, but the seals show a "grim world of cruel conflict, of danger and uncertainty, a world in which man is subjected without appeal to the incomprehensible acts of distant and fearful divinities whom he must serve but cannot love. This sombre mood ... remained characteristic of Mesopotamian art..." Akkadian sculpture is remarkable for its fineness and realism, which show a clear advancement compared to the previous period of Sumerian art. The Akkadians used the visual arts as a vehicle of ideology.
They developed a new style for cylinder seals by reusing traditional animal decorations but organizing them around inscriptions, which often became central parts of the layout. The figures also became more sculptural and naturalistic. New elements were also included, especially in relation to the rich Akkadian mythology. During the 3rd millennium BC, a very intimate cultural symbiosis developed between the Sumerians and the Akkadians, which included widespread bilingualism. The influence of Sumerian on Akkadian (and vice versa) is evident in all areas, from lexical borrowing on a massive scale to syntactic, morphological, and phonological convergence. This has prompted scholars to refer to Sumerian and Akkadian in the third millennium as a sprachbund. Akkadian gradually replaced Sumerian as a spoken language somewhere around 2000 BC (the exact dating being a matter of debate), but Sumerian continued to be used as a sacred, ceremonial, literary, and scientific language in Mesopotamia until the 1st century AD. Sumerian literature continued in rich development during the Akkadian period. Enheduanna, the "wife (Sumerian dam = high priestess) of Nanna [the Sumerian moon god] and daughter of Sargon" of the temple of Sin at Ur, who lived c. 2285–2250 BC, is the first poet in history whose name is known. Her known works include hymns to the goddess Inanna, the Exaltation of Inanna and In-nin sa-gur-ra. A third work, the Temple Hymns, a collection of specific hymns, addresses the temples and their occupants, the deities to whom they were consecrated. The works of this poet are significant because, although they start out using the third person, they shift to the first-person voice of the poet herself, and they mark a significant development in the use of cuneiform. As poet, princess, and priestess, she was a person who, according to William W. Hallo, "set standards in all three of her roles for many succeeding centuries". In the Exaltation of Inanna, Enheduanna depicts Inanna as disciplining mankind as a goddess of battle. She thereby unites the warlike qualities of the Akkadian Ishtar with those of the gentler Sumerian goddess of love and fecundity. She likens Inanna to a great storm bird who swoops down on the lesser gods and sends them fluttering off like surprised bats. Then, in probably the most interesting part of the hymn, Enheduanna herself steps forward in the first person to recite her own past glories, establishing her credibility and explaining her present plight. She has been banished as high priestess from the temple in the city of Ur and from Uruk and exiled to the steppe. She begs the moon god Nanna to intercede for her because the city of Uruk, under the ruler Lugalanne, has rebelled against Sargon. The rebel, Lugalanne, has even destroyed the temple Eanna, one of the greatest temples in the ancient world, and then made advances on his sister-in-law. The kings of Akkad were legendary among later Mesopotamian civilizations, with Sargon understood as the prototype of a strong and wise leader, and his grandson Naram-Sin considered the wicked and impious leader (Unheilsherrscher, "ruler of mischief", in the analysis of Hans Gustav Güterbock) who brought ruin upon his kingdom. Technology A tablet from the period reads, "(From the earliest days) no-one had made a statue of lead, (but) Rimush king of Kish, had a statue of himself made of lead. It stood before Enlil; and it recited his (Rimush's) virtues to the idu of the gods".
The copper Bassetki Statue, cast with the lost-wax method, testifies to the high level of skill that craftsmen achieved during the Akkadian period.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_ref-Monterescup16_73-0] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9½ mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church, and it remains a titular see to this day. Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 miles) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic period and is associated with the Lodian culture. Occupation continued into the Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa.
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have revealed a further occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall without stone foundations and with imported Egyptian pottery and local pottery imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish–Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah; other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), Joshua ben Levi is said to have founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian who was born in the town between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba Map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque, which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, the Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596, Lydda was part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special produce ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and a market toll, for a total of 45,000 akçe. All of the revenue went to the waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to the Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians comprised 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: the Arab historian Aref al-Aref estimated 400, and Nimr al-Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, in which the Israel Defense Forces expelled 50,000–70,000 Palestinians from the two towns. All but 700 to 1,056 were expelled by order of the Israeli high command and forced to walk 17 km (10½ mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organisations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab". Education According to CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 pupils, and 13 high schools with 4,863 pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001. Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard", "Cafe-Co", a subsidiary of the Strauss Group, and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to the widening of HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. It is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home ground is the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy), was established soon after, but folded in 2007.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zico_Kolter] | [TOKENS: 183] |
Contents Zico Kolter Jeremy Zico Kolter is a professor at Carnegie Mellon University and director of its machine learning department. He focuses primarily on AI safety research. He is a co-founder and senior advisor of Gray Swan AI, an AI safety and security company. In 2024, he was appointed to the board of directors of OpenAI and became chair of its safety and security committee. In 2025, he was named a recipient of funding from the Schmidt Sciences AI safety science program. Kolter earned his PhD in computer science at Stanford University and completed a postdoctoral fellowship at the Massachusetts Institute of Technology. He joined the CMU faculty in 2012. His other corporate positions have included chief data scientist at C3.ai and chief expert at the Bosch Center for AI. At CMU, he has worked on projects such as finding ways to automate the assessment of large language model safety.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Post-Zionism] | [TOKENS: 348] |
Contents Post-Zionism Post-Zionism is the opinion of some Israelis, diaspora Jews and others, particularly in academia, that Zionism fulfilled its ideological mission with the formation of the modern State of Israel in 1948, and that Zionist ideology should therefore be considered at an end. The Jewish right also uses the term to refer to the Israeli Left in light of the Oslo Accords of 1993 and 1995. Some critics associate post-Zionism with anti-Zionism; proponents strenuously deny this association. Hebrew Universalism Hebrew Universalism is a post-Zionist philosophy developed initially by Rav Abraham Kook and expanded upon by the Israeli settler activist Rav Yehuda HaKohen, as well as the Vision Movement. The philosophy attempts to synthesize the "three forces" defined by Kook in his 1920 book Lights of Rebirth: "The Holy" (Orthodox Jews), "The Nation" (secular Jewish Zionists), and "The Humanist" (general humanism). Kook believed that through his philosophy anti-Zionists, Orthodox Jews, and secular nationalists could work together in Israel. The current ideology, as espoused by the Vision Movement and HaKohen, draws inspiration from Natan Yellin-Mor, Rav Abraham Kook, Canaanism, Avraham Stern, anti-Zionist critics, and the left-wing Semitic Action group. Criticism Post-Zionism has been criticized by Shlomo Avineri as a polite recasting of anti-Zionism, and therefore a deceptive term. Some right-wing Israelis have accused Jewish post-Zionists of being self-hating Jews.
======================================== |