[SOURCE: https://en.wikipedia.org/wiki/Thinking_Machines_Lab] | [TOKENS: 439] |
Thinking Machines Lab

Thinking Machines Lab Inc. is an American artificial intelligence (AI) startup founded by Mira Murati, the former chief technology officer of OpenAI. The company was founded in February 2025, and by July it had completed an early-stage funding round led by Andreessen Horowitz, raising $2 billion at a valuation of $12 billion from investors including Nvidia, AMD, Cisco, and Jane Street. The company is based in San Francisco and structured as a public benefit corporation.

History

By its launch in February 2025, Thinking Machines Lab was reported to have hired about 30 researchers and engineers from competitors including OpenAI, Meta AI, and Mistral AI. Its founding team members include Barret Zoph, former OpenAI VP of Research (Post-Training); Lilian Weng, former OpenAI VP; and OpenAI cofounder John Schulman, who joined after a brief stint at the lab's competitor Anthropic. Other former OpenAI employees who have been hired include Jonathan Lachman and Andrew Tulloch (although Tulloch departed after being recruited for Meta Superintelligence Labs). Thinking Machines Lab's advisers include Bob McGrew, previously OpenAI's chief research officer, and Alec Radford, who was a lead researcher at OpenAI.

On October 1, 2025, the company announced Tinker, an API for fine-tuning language models. Users submit fine-tuning jobs through the API for one of the supported open-weight models, and the lab runs those jobs on its internal clusters and training infrastructure.

Business structure

Thinking Machines Lab grants Mira Murati a deciding vote on board matters, weighted to give her majority decision-making capability. Additionally, founding shareholders hold votes weighted 100 times greater than those of regular shareholders. In July 2025, Andreessen Horowitz was reported to have led the company's initial funding round, raising "about $2 billion at a valuation of $12 billion". The government of Albania (Murati's country of origin) also participated in this round with a $10 million investment, which required an amendment to the country's 2025 budget.
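The entry above describes the Tinker workflow only at the level of "submit a job through the API and the lab runs it on its clusters". The snippet below is a hypothetical sketch of that kind of hosted fine-tuning workflow; the endpoint URL, field names, job states, and polling route are invented for illustration and are not the actual Tinker API.

```python
# Hypothetical fine-tuning-as-a-service client, sketched for illustration only.
# The base URL, JSON fields, and job states below are assumptions, not Tinker's real interface.
import time
import requests

API = "https://api.example-finetuning-service.com/v1"   # placeholder base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}       # placeholder credential

# Submit a job: choose a supported open-weight base model and point at training data.
job = requests.post(f"{API}/jobs", headers=HEADERS, json={
    "base_model": "open-weight-model-7b",                # placeholder model name
    "training_file": "s3://my-bucket/chat_data.jsonl",   # placeholder dataset location
    "hyperparameters": {"epochs": 3, "learning_rate": 1e-5},
}).json()

# The provider runs the job on its own clusters; the client only polls for status.
while True:
    status = requests.get(f"{API}/jobs/{job['id']}", headers=HEADERS).json()
    if status["state"] in ("succeeded", "failed"):
        break
    time.sleep(30)

print(status["state"], status.get("fine_tuned_model"))
```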
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Microthread] | [TOKENS: 209] |
Microthread

Microthreads are functions that may run in parallel to gain increased performance in microprocessors. They provide an execution model that uses a few additional instructions in a conventional processor to break code down into fragments that execute simultaneously. Dependencies are managed by making the registers of the microprocessors executing the code synchronising: a microthread that reads such a register waits until another microthread has produced the data. This is a form of dataflow. The model can be applied to an existing instruction set architecture incrementally, since only five new instructions are needed to implement its concurrency controls. A set of microthreads is a static partition of a basic block into concurrently executing fragments, which execute on a single processor and share a microcontext. An iterator over such a set, created dynamically, is called a family of microthreads: it provides a dynamic and parametric group of microthreads, captures loop concurrency, and can be scheduled to different processors. This is the mechanism that allows the model to generate concurrency that can be run on multiple processors or functional units.
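Microthreads are a hardware/ISA-level mechanism, but the two ideas described above, synchronising registers (a reader blocks until a producer writes) and a parametric "family" generated from an iterator, can be imitated in ordinary software. The sketch below is a conceptual analogy in Python, not an implementation of the actual instruction set; all names are illustrative.

```python
# Conceptual analogy only: futures stand in for "synchronising registers", and a
# group of worker tasks stands in for a parametric family of microthreads.
from concurrent.futures import ThreadPoolExecutor, Future

def create_family(executor, body, n):
    """Spawn a parametric 'family' of fragments, one per index (loop concurrency)."""
    return [executor.submit(body, i) for i in range(n)]

def main():
    with ThreadPoolExecutor() as pool:
        reg = Future()                      # a 'synchronising register'

        def producer(i):
            reg.set_result(i * i)           # write the register exactly once

        def consumer(i):
            return reg.result() + i         # block until the producer's data is available (dataflow)

        pool.submit(producer, 7)
        family = create_family(pool, consumer, 4)   # family of 4 microthread-like fragments
        print([f.result() for f in family])         # [49, 50, 51, 52]

if __name__ == "__main__":
    main()
```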
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Simulink] | [TOKENS: 494] |
Simulink

Simulink is a MATLAB-based graphical programming environment for modeling, simulating and analyzing multidomain dynamical systems. Its primary interface is a graphical block diagramming tool and a customizable set of block libraries. It offers tight integration with the rest of the MATLAB environment and can either drive MATLAB or be scripted from it. Simulink is widely used in automatic control and digital signal processing for multidomain simulation and model-based design.

Add-on products

MathWorks and third-party hardware and software products can be used with Simulink. For example, Stateflow extends Simulink with a design environment for developing state machines and flow charts. Simscape is an add-on product that extends Simulink with support for physical modeling. Developed by MathWorks, it enables the modeling and simulation of multidomain physical systems by connecting physical components whose interactions represent domains such as electrical, mechanical, thermal, and fluid systems. Simscape models can be combined with standard Simulink block diagrams, allowing the simulation of systems that couple physical dynamics with control algorithms and signal-based components.

Coupled with another MathWorks product, Simulink can automatically generate C source code for real-time implementation of systems. As the efficiency and flexibility of the generated code have improved, this capability has been increasingly adopted for production systems, in addition to serving as a tool for embedded system design work because of its capacity for quick iteration[citation needed]. Embedded Coder creates code efficient enough for use in embedded systems. Simulink Real-Time (formerly known as xPC Target), together with x86-based real-time systems, is an environment for simulating and testing Simulink and Stateflow models in real time on the physical system. Another MathWorks product also supports specific embedded targets. When used with other generic products, Simulink and Stateflow can automatically generate synthesizable VHDL and Verilog[citation needed].

Simulink Verification and Validation enables systematic verification and validation of models through modeling style checking, requirements traceability and model coverage analysis. Simulink Design Verifier uses formal methods to identify design errors such as integer overflow, division by zero and dead logic, and generates test case scenarios for model checking within the Simulink environment. SimEvents adds a library of graphical building blocks for modeling queuing systems to the Simulink environment, as well as an event-based simulation engine alongside Simulink's time-based simulation engine.
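The claim that Simulink "can either drive MATLAB or be scripted from it" can also be exercised from outside MATLAB. The following is a minimal sketch assuming MathWorks' MATLAB Engine API for Python is installed and that a Simulink model named 'thermostat_model' (a hypothetical name) is on the MATLAB path; it is not an official MathWorks example.

```python
# Minimal sketch: drive a Simulink simulation from Python via the MATLAB Engine API.
# The model name 'thermostat_model' is hypothetical.
import matlab.engine

eng = matlab.engine.start_matlab()                                # start a background MATLAB session
eng.load_system('thermostat_model', nargout=0)                    # load the model without opening the editor
eng.set_param('thermostat_model', 'StopTime', '10', nargout=0)    # adjust a simulation parameter
eng.eval("simOut = sim('thermostat_model');", nargout=0)          # run the simulation inside MATLAB
stop_time = eng.get_param('thermostat_model', 'StopTime')         # read a parameter back into Python
print("Simulated up to t =", stop_time)
eng.quit()
```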
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/1-59102-361-0] | [TOKENS: 380] |
Book sources

This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number (ISBN); spaces and dashes in the ISBN do not matter. It links to catalogs of libraries, booksellers, and other book sources where you can search for the book by its ISBN.

Online text

Google Books and other retail sources may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online.

Libraries

Library catalogs are listed by US state: Alabama, Alaska, California, Colorado, Connecticut, Delaware, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Massachusetts, Michigan, Minnesota, Missouri, Nebraska, New Jersey, New Mexico, New York, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Washington state, and Wisconsin.

Bookselling and swapping

Some sites compile results from other online sites, while others allow you to search the catalogs of many individual booksellers.

Non-English book sources

If the book you are looking for is in a language other than English, it may help to look at the equivalent pages on other Wikipedias; they are more likely to list sources appropriate for that language.

Find other editions

The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search, and Google Books often lists other editions of a book, and related books, under the "about this book" link. You can also convert between 10- and 13-digit ISBNs with dedicated tools; a small sketch of that conversion follows below.
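For illustration of that last point, here is a small stand-alone Python sketch of the ISBN-10 to ISBN-13 conversion; it is not one of the external tools the page links to, just the standard "prefix with 978 and recompute the check digit" procedure.

```python
# Convert an ISBN-10 to its ISBN-13 form: prepend the 978 prefix to the first nine
# digits and recompute the check digit with alternating weights 1 and 3.
def isbn10_to_isbn13(isbn10: str) -> str:
    chars = [c for c in isbn10 if c not in "- "]             # ignore spaces and dashes
    if len(chars) != 10:
        raise ValueError("expected a 10-character ISBN")
    core = "978" + "".join(chars[:9])                         # drop the old check digit
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(core))
    check = (10 - total % 10) % 10                            # ISBN-13 check digit
    return core + str(check)

print(isbn10_to_isbn13("1-59102-361-0"))   # the ISBN in this page's URL -> 9781591023616
```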
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-39] | [TOKENS: 11349] |
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility Jesus could have visited extraterrestrial worlds to redeem their inhabitants.: 26 In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation.: 67 In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion Kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life were yet nonexistent. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth. 
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia. During most of its stellar evolution, stars combine hydrogen nuclei to make helium nuclei by stellar fusion, and the comparatively lighter weight of helium allows the star to release the extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, stars start combining helium nuclei to form carbon nuclei. The larger stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on until iron. Ultimately, the star blows much of its content back into the stellar medium, where it would join clouds that would eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place in all the universe, said materials are ubiquitous in the cosmos and not a rarity from the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons took nine years after launch to reach Pluto. No probe has ever reached extrasolar planetary systems. The Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter has a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, not even to actually have such liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures. 
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief lifespans of Earth's species suggest that extraterrestrial life, on such a scale, may be equally fleeting. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is ubiquitous across the planet and has adapted over time to almost all the available environments on it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements: a celestial body may not have any life on it, even if it were habitable.

Likelihood of existence

Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible only because of a series of factors ranging from its location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet meets all of those requirements simultaneously. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is

N = R* ⋅ fp ⋅ ne ⋅ fl ⋅ fi ⋅ fc ⋅ L

where N is the number of civilizations in the Milky Way whose electromagnetic emissions are detectable, R* is the average rate of star formation in the galaxy, fp is the fraction of those stars with planetary systems, ne is the average number of planets per such system that could support life, fl is the fraction of those planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that release detectable signs of their existence, and L is the length of time over which such civilizations remain detectable. Drake's proposed estimates were as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution:

10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000 [better source needed]

The Drake equation has proved controversial since, although it is written as a mathematical equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation.

Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems suggests that such systems can take several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox.

Biochemical basis

If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all of those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are stars; life on Earth, for example, depends on the energy of the Sun. However, there are alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
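As a worked check of the Drake-equation figures quoted above (10,000 = 5 ⋅ 0.5 ⋅ 2 ⋅ 1 ⋅ 0.2 ⋅ 1 ⋅ 10,000), the snippet below multiplies the factors out. How the quoted numbers map onto the individual parameters is an assumption made here for illustration; as the text notes, all of them are speculative.

```python
# Worked illustration of the Drake equation N = R* . fp . ne . fl . fi . fc . L
# using the illustrative numbers quoted in the text. The assignment of each number
# to a specific factor is an assumption for the sake of the example.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=5,     # star-formation rate (stars per year)
          f_p=0.5,      # fraction of stars with planetary systems
          n_e=2,        # life-capable planets per star that has planets
          f_l=1,        # fraction of those on which life actually appears
          f_i=0.2,      # fraction of those that develop intelligence
          f_c=1,        # fraction of those that emit detectable signals
          L=10_000)     # years a civilization remains detectable
print(N)                # 10000.0
```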
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant ones in the universe, far more than the others. On Earth's crust the most abundant of those elements is silicon, in the Hydrosphere it is carbon and in the atmosphere, it is carbon and nitrogen. Silicon, however, has disadvantages over carbon. The molecules formed with silicon atoms are less stable, and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem, the difficulty to kickstart a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976 considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage/decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. 
Extraterrestrial life may still rely on RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those on Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process to achieve such a result has been conceived, not even a hypothetical one. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote, in a study in the International Journal of Astrobiology, that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, since an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches.

The conditions on the other planets of the Solar System, and in the many galaxies beyond the Milky Way, are generally regarded as very harsh and seemingly too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, a lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that, at first sight, seem unlikely to have harbored life. Fossil evidence, together with long-standing theories backed by years of research, has marked environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments can be considered extreme when compared with the typical ecosystems that the majority of life on Earth now inhabits; hydrothermal vents, for instance, are scorching hot because magma escaping from the Earth's mantle meets much colder oceanic water. Even today, a diverse population of bacteria can be found inhabiting the area surrounding these hydrothermal vents, which suggests that some form of life could be supported even in environments as harsh as those on other planets of the Solar System. The aspect of these harsh environments that makes them plausible sites for the origin of life on Earth, and for the possible emergence of life on other planets, is that chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and these carbon-fixing compounds were therefore necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of other planets in the Milky Way galaxy and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on their surfaces and possibly result in the origin of extraterrestrial life.

Planetary habitability in the Solar System

The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No intelligence other than the human species is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages but developed in a different way: it has a greenhouse effect, the hottest surface in the Solar System, clouds of sulfuric acid, and a thick carbon-dioxide atmosphere with enormous pressure, and all of its surface liquid water has been lost. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant solar system bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope to find it on moons orbiting these planets. Europa, from the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because water is sandwiched between layers of solid ice. Europa would have contact between the ocean and the rocky surface, which helps the chemical reactions. It may be difficult to dig so deep in order to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be dug, as it releases water to space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA did not expect this phenomenon and did not equip the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on the surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculations about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it is of such a great depth that it would be very difficult to access it for study. Scientific search The science that searches and studies life in the universe, both on Earth and elsewhere, is called astrobiology. With the study of Earth's life, the only known form of life, astrobiology seeks to study how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life in other celestial bodies. This is a complex area of study, and uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017[update], 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. 
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In February 2005 NASA scientists reported they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory that landed the Curiosity rover on Mars. It is designed to assess the past and present habitability on Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, with the way each one reacts to sunlight. The goal is to help with the search for similar organisms in exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth was studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants with photosynthesis. In August 2011, NASA studied meteorites found on Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out pollution of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear if those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first time discovery, in the plumes of Enceladus, moon of the planet Saturn, of hydrogen cyanide, a possible chemical essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. 
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable of developing a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the home planet that cannot be attributed to natural causes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.

Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Besides, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space also means that a potential answer may arrive decades or centuries after the initial message.

The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which can be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth, and fossil fuels would likely be generated and used on such worlds as well. The abundance of chlorofluorocarbons in an atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star, which would require giant structures built next to it, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation that telescopes could notice. Excess infrared radiation is typical of young stars, which are surrounded by the dusty protoplanetary disks that will eventually form planets, but an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products.

Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter.
The number of observed exoplanets is expected to increase greatly in the coming years.[better source needed] The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone,[c] with the nearest expected to be within 12 light-years distance from Earth. Assuming 200 billion stars in the Milky Way,[d] that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014[update], the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable and rejected explanations based on supernatural incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not developed the scientific method yet and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as that explanations had to be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model that considered that the sun and all other celestial bodies revolve around Earth. However, they did not consider them as worlds. In Greek understanding, the world was composed by both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. 
Eventually two groups emerged, the atomists that thought that matter at both Earth and the cosmos was equally made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians who thought that those elements were exclusive of Earth and that the cosmos was made of a fifth one, the aether. Atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the center of the universe, and that would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only in the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds were different from the current knowledge about the structure of the universe, and did not postulate the existence of planetary systems other than the Solar System. When those authors talk about other worlds, they talk about places located at the center of their own systems, and with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the fall of the Greek empire. The Great Library of Alexandria compiled information about it, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and the knowledge expanded through the Byzantine Empire. From there it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to those topics in a homogeneous way, and there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. He proposed the idea that life exists everywhere. By the time of the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants. 
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which worked now almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, clarified the final doubts, and the paradigm shift was completed. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but just a planet orbiting around a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making the planet truly special. The new ideas were met with resistance from the Catholic church. Galileo was tried for the heliocentric model, which was considered heretical, and forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by the postulation of the theory of gravity by Sir Isaac Newton. This theory provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitely discarded. By this time, the use of the scientific method had become a standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons for working that way. There was very little actual discussion about extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When it was finally proved wrong, it not only meant that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist in them as well soon became an ongoing topic of discussion, although one with no practical ways to investigate. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. 
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. This theory was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the solar system remained strong until Mariner 4 and Mariner 9 provided close images of Mars, which put an end to the idea of the existence of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).

The science fiction genre, although not yet so named, developed during the late 19th century. The expansion of extraterrestrials as a theme in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, while later observations with more powerful instruments revealed that all such "discoveries" were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter: the low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site.

The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not for the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.

The 20th century came with great technological advances, speculation about future hypothetical technologies, and increased basic knowledge of science among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the question of the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people at the time failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System exists only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability allows the likelihood of finding life on each specific celestial body to be considered in scientific terms, as it is now known which features are beneficial or harmful to life. Astronomy and telescopes also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". 
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms in the event of extraterrestrial contact. One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possible existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "unidentified aerospace phenomena". The agency maintains a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for 25% of entries an extraterrestrial origin can neither be confirmed nor denied. 
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, such beings were not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be otherwise. This was changed by the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do this was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later as CGI became more effective and less expensive. Real-life events sometimes capture the public imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype once it was used in works of fiction. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/American_frontier] | [TOKENS: 28846] |
Contents American frontier The American frontier, also known as the Old West, and popularly known as the Wild West, encompasses the geography, history, folklore, and culture associated with the forward wave of American expansion in mainland North America that began with European colonial settlements in the early 17th century and ended with the admission of the last few contiguous western territories as states in 1912. This era of massive migration and settlement was particularly encouraged by President Thomas Jefferson following the Louisiana Purchase, giving rise to the expansionist attitude known as "manifest destiny" and historians' "Frontier Thesis". The legends, historical events and folklore of the American frontier, known as the frontier myth, have embedded themselves into United States culture so much so that the Old West, and the Western genre of media specifically, has become one of the defining features of American national identity. Periodization Historians have debated at length as to when the frontier era began, when it ended, and which were its key sub-periods. For example, the Old West subperiod is sometimes used by historians regarding the time from the end of the American Civil War in 1865 to when the Superintendent of the Census, William Rush Merriam, stated that the U.S. Census Bureau would stop recording western frontier settlement as part of its census categories after the 1890 U.S. census. His successors however continued the practice until the 1920 census. Others, including the Library of Congress and the University of Oxford, often cite differing points reaching into the early 1900s; typically within the first two decades before American entry into World War I. A period known as "The Western Civil War of Incorporation" lasted from the 1850s to 1919. This period includes historical events synonymous with the archetypical Old West or "Wild West" such as violent conflict arising from encroaching settlement into frontier land, the removal and assimilation of natives, consolidation of property to large corporations and government, vigilantism, and the attempted enforcement of laws upon outlaws. In 1890, the Superintendent of the Census, William Rush Merriam, stated: "Up to and including 1880 the country had a frontier of land , but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not, therefore, any longer have a place in the census reports." Despite this, the later 1900 U.S. census continued to show the westward frontier line, and his successors continued the practice. By the 1910 U.S. census however, the frontier had shrunk into divided areas without a singular westward line of settlement. An influx of agricultural homesteaders in the first two decades of the 20th century, taking up more acreage than homestead grants in the entirety of the 19th century, is cited to have significantly reduced open land. A frontier is a zone of contact at the edge of a line of settlement. Theorist Frederick Jackson Turner argued that the frontier was the scene of a defining process of American civilization: "The frontier," he asserted, "promoted the formation of a composite nationality for the American people." He theorized it was a process of development: "This perennial rebirth, this fluidity of American life, this expansion westward...furnish[es] the forces dominating American character." 
Turner's ideas since 1893 have inspired generations of historians (and critics) to explore multiple individual American frontiers, but the popular folk frontier concentrates on the conquest and settlement of Native American lands west of the Mississippi River, in what is now the Midwest, Texas, the Great Plains, the Rocky Mountains, the Southwest, and the West Coast. Enormous popular attention was focused on the Western United States (especially the Southwest) in the second half of the 19th century and the early 20th century, from the 1850s to the 1910s. Such media typically exaggerated the romance, anarchy, and chaotic violence of the period for greater dramatic effect. This inspired the Western genre of film, along with television shows, novels, comic books, video games, children's toys, and costumes. As defined by Hine and Faragher, "frontier history tells the story of the creation and defense of communities, the use of the land, the development of crops and hotels, and the formation of states." They explain, "It is a tale of conquest, but also one of survival, persistence, and the merging of peoples and cultures that gave birth and continuing life to America." Turner himself repeatedly emphasized how the availability of "free land" to start new farms attracted pioneering Americans: "The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development." Through treaties with foreign nations and native tribes, political compromise, military conquest, the establishment of law and order, the building of farms, ranches, and towns, the marking of trails and digging of mines, combined with successive waves of immigrants moving into the region, the United States expanded from coast to coast, fulfilling the ideology of Manifest Destiny. In his "Frontier Thesis" (1893), Turner theorized that the frontier was a process that transformed Europeans into a new people, the Americans, whose values focused on equality, democracy, and optimism, as well as individualism, self-reliance, and even violence. The frontier is the margin of undeveloped territory that would comprise the United States beyond the established frontier line. The U.S. Census Bureau designated frontier territory as generally unoccupied land with a population density of fewer than 2 people per square mile (0.77 people per square kilometer). The frontier line was the outer boundary of European-American settlement into this land. Beginning with the first permanent European settlements on the East Coast, it has moved steadily westward from the 1600s to the 1900s (decades) with occasional movements north into Maine and New Hampshire, south into Florida, and east from California into Nevada. Pockets of settlements would also appear far past the established frontier line, particularly on the West Coast and the deep interior, with settlements such as Los Angeles and Salt Lake City respectively. The "West" was the recently settled area near that boundary. Thus, parts of the Midwest and American South, though no longer considered "western", have a frontier heritage along with the modern western states. Richard W. Slatta, in his view of the frontier, writes that "historians sometimes define the American West as lands west of the 98th meridian or 98° west longitude," and that other definitions of the region "include all lands west of the Mississippi or Missouri rivers." 
Maps of United States territories History In the colonial era, before 1776, the west was of high priority for settlers and politicians. The American frontier began when Jamestown, Virginia, was settled by the English in 1607. In the earliest days of European settlement on the Atlantic coast, until about 1680, the frontier was essentially any part of the interior of the continent beyond the fringe of existing settlements along the Atlantic coast. English, French, Spanish, and Dutch patterns of expansion and settlement were quite different. Only a few thousand French migrated to Canada; these habitants settled in villages along the St. Lawrence River, building communities that remained stable for long stretches. Although French fur traders ranged widely through the Great Lakes and midwest region, they seldom settled down. French settlement was limited to a few very small villages such as Kaskaskia, Illinois, as well as a larger settlement around New Orleans. In what is now New York state, the Dutch set up fur trading posts in the Hudson River valley, followed by large grants of land to rich landowning patroons who brought in tenant farmers who created compact, permanent villages. They created a dense rural settlement in upstate New York, but they did not push westward. Areas in the north that were in the frontier stage by 1700 generally had poor transportation facilities, so the opportunity for commercial agriculture was low. These areas remained primarily in subsistence agriculture, and as a result, by the 1760s these societies were highly egalitarian, as explained by historian Jackson Turner Main: The typical frontier society, therefore, was one in which class distinctions were minimized. The wealthy speculator, if one was involved, usually remained at home, so that ordinarily no one of wealth was a resident. The class of landless poor was small. The great majority were landowners, most of whom were also poor because they were starting with little property and had not yet cleared much land nor had they acquired the farm tools and animals which would one day make them prosperous. Few artisans settled on the frontier except for those who practiced a trade to supplement their primary occupation of farming. There might be a storekeeper, a minister, and perhaps a doctor; and there were several landless laborers. All the rest were farmers. In the South, frontier areas that lacked transportation, such as the Appalachian Mountains region, remained based on subsistence farming and resembled the egalitarianism of their northern counterparts, although they had a larger upper class of slaveowners. North Carolina was representative. However, frontier areas of 1700 that had good river connections were increasingly transformed into plantation agriculture. Rich men came in, bought up the good land, and worked it with slaves. The area was no longer "frontier". It had a stratified society comprising a powerful upper-class white landowning gentry, a small middle class, a fairly large group of landless or tenant white farmers, and a growing slave population at the bottom of the social pyramid. Unlike the North, where small towns and even cities were common, the South was overwhelmingly rural. The seaboard colonial settlements gave priority to land ownership for individual farmers, and as the population grew they pushed westward for fresh farmland. 
Unlike Britain, where a small number of landlords owned most of the land, ownership in America was cheap, easy and widespread. Land ownership brought a degree of independence as well as a vote for local and provincial offices. The typical New England settlements were quite compact and small, under a square mile. Conflict with the Native Americans arose out of political issues, namely who would rule. Early frontier areas east of the Appalachian Mountains included the Connecticut River valley, and northern New England (which was a move to the north, not the west). Settlers on the frontier often connected isolated incidents to indicate Indian conspiracies to attack them, but these lacked a French diplomatic dimension after 1763, or a Spanish connection after 1820. Most of the frontiers experienced numerous conflicts. The French and Indian War broke out between Britain and France, with the French making up for their small colonial population base by enlisting Native war parties as allies. The series of large wars spilling over from European wars ended in a complete victory for the British in the worldwide Seven Years' War. In the peace treaty of 1763, France ceded practically everything, as the lands west of the Mississippi River, in addition to Florida and New Orleans, went to Spain. Otherwise, lands east of the Mississippi River and what is now Canada went to Britain.[citation needed] Regardless of wars, Americans were moving across the Appalachians into western Pennsylvania, what is now West Virginia, and areas of the Ohio Country, Kentucky, and Tennessee. In the southern settlements via the Cumberland Gap, their most famous leader was Daniel Boone. Young George Washington promoted settlements in western Virginia and what is now West Virginia on lands awarded to him and his soldiers by the Royal government in payment for their wartime service in Virginia's militia. Settlements west of the Appalachian Mountains were curtailed briefly by the Royal Proclamation of 1763, forbidding settlement in this area. The Treaty of Fort Stanwix (1768) re-opened most of the western lands for frontiersmen to settle. The nation was at peace after 1783. The states gave Congress control of the western lands and an effective system for population expansion was developed. The Northwest Ordinance of 1787 abolished slavery in the area north of the Ohio River and promised statehood when a territory reached a threshold population, as Ohio did in 1803. The first major movement west of the Appalachian mountains originated in Pennsylvania, Virginia, and North Carolina as soon as the Revolutionary War ended in 1781. Pioneers housed themselves in a rough lean-to or at most a one-room log cabin. The main food supply at first came from hunting deer, turkeys, and other abundant game. Clad in typical frontier garb, leather breeches, moccasins, fur cap, and hunting shirt, and girded by a belt from which hung a hunting knife and a shot pouch—all homemade—the pioneer presented a unique appearance. In a short time he opened in the woods a patch, or clearing, on which he grew corn, wheat, flax, tobacco, and other products, even fruit. In a few years, the pioneer added hogs, sheep, and cattle, and perhaps acquired a horse. Homespun clothing replaced the animal skins. The more restless pioneers grew dissatisfied with over civilized life and uprooted themselves again to move 50 or a hundred miles (80 or 160 km) further west. The land policy of the new nation was conservative, paying special attention to the needs of the settled East. 
The goals sought by both parties in the 1790–1820 era were to grow the economy, avoid draining away the skilled workers needed in the East, distribute the land wisely, sell it at prices that were reasonable to settlers yet high enough to pay off the national debt, clear legal titles, and create a diversified Western economy that would be closely interconnected with the settled areas with minimal risk of a breakaway movement. By the 1830s, however, the West was filling up with squatters who had no legal deed, although they may have paid money to previous settlers. The Jacksonian Democrats favored the squatters by promising rapid access to cheap land. By contrast, Henry Clay was alarmed at the "lawless rabble" heading West who were undermining the utopian concept of a law-abiding, stable middle-class republican community. Rich southerners, meanwhile, looked for opportunities to buy high-quality land to set up slave plantations. The Free Soil movement of the 1840s called for low-cost land for free white farmers, a position enacted into law by the new Republican Party in 1862, offering free 160 acres (65 ha) homesteads to all adults, male and female, black and white, native-born or immigrant. After winning the Revolutionary War (1783), American settlers in large numbers poured into the west. In 1788, American pioneers to the Northwest Territory established Marietta, Ohio, as the first permanent American settlement in the Northwest Territory. In 1775, Daniel Boone blazed a trail for the Transylvania Company from Virginia through the Cumberland Gap into central Kentucky. It was later lengthened to reach the Falls of the Ohio at Louisville. The Wilderness Road was steep and rough, and it could only be traversed on foot or horseback, but it was the best route for thousands of settlers moving into Kentucky. In some areas they had to face Native attacks. In 1784 alone, Natives killed over 100 travelers on the Wilderness Road. Kentucky at this time had been depopulated—it was "empty of Indian villages." However raiding parties sometimes came through. One of those intercepted was Abraham Lincoln's grandfather, who was scalped in 1784 near Louisville. The War of 1812 marked the final confrontation involving major British and Native forces fighting to stop American expansion. The British war goal included the creation of an Indian barrier state under British auspices in the Midwest which would halt American expansion westward. American frontier militiamen under General Andrew Jackson defeated the Creeks and opened the Southwest, while militia under Governor William Henry Harrison defeated the Native-British alliance at the Battle of the Thames in Canada in 1813. The death in battle of the Native leader Tecumseh dissolved the coalition of hostile Native tribes. Meanwhile, General Andrew Jackson ended the Native military threat in the Southeast at the Battle of Horseshoe Bend in 1814 in Alabama. In general, the frontiersmen battled the Natives with little help from the U.S. Army or the federal government. To end the war, American diplomats negotiated the Treaty of Ghent, signed towards the end of 1814, with Britain. They rejected the British plan to set up a Native state in U.S. territory south of the Great Lakes. 
They explained the American policy toward the acquisition of Native lands: The United States, while intending never to acquire lands from the Indians otherwise than peaceably, and with their free consent, are fully determined, in that manner, progressively, and in proportion as their growing population may require, to reclaim from the state of nature, and to bring into cultivation every portion of the territory contained within their acknowledged boundaries. In thus providing for the support of millions of civilized beings, they will not violate any dictate of justice or humanity; for they will not only give to the few thousand savages scattered over that territory an ample equivalent for any right they may surrender, but will always leave them the possession of lands more than they can cultivate, and more than adequate to their subsistence, comfort, and enjoyment, by cultivation. If this is a spirit of aggrandizement, the undersigned are prepared to admit, in that sense, its existence; but they must deny that it affords the slightest proof of an intention not to respect the boundaries between them and European nations, or of a desire to encroach upon the territories of Great Britain. [...] They will not suppose that that Government will avow, as the basis of their policy towards the United States, a system of arresting their natural growth within their territories, for the sake of preserving a perpetual desert for savages. As settlers poured in, the frontier districts first became territories, with an elected legislature and a governor appointed by the president. Then, when the population reached 100,000, the territory applied for statehood. Frontiersmen typically dropped the legalistic formalities and restrictive franchise favored by eastern upper classes and adopted more democracy and more egalitarianism. By 1810, the western frontier had reached the Mississippi River. St. Louis, Missouri, was the largest town on the frontier, the gateway for travel westward, and a principal trading center for Mississippi River traffic and inland commerce, but remained under Spanish control until 1803. Thomas Jefferson thought of himself as a man of the frontier and was keenly interested in expanding and exploring the West. Jefferson's Louisiana Purchase of 1803 doubled the size of the nation at the cost of $15 million, or about $0.04 per acre ($323 million in 2025 dollars, less than 42 cents per acre). Federalists opposed the expansion, but Jeffersonians hailed the opportunity to create millions of new farms to expand the domain of land-owning yeomen; the ownership would strengthen the ideal republican society, based on agriculture (not commerce), governed lightly, and promoting self-reliance and virtue, as well as form the political base for Jeffersonian Democracy. France was paid for its sovereignty over the territory in terms of international law. Between 1803 and the 1870s, the federal government purchased the land from the Native tribes then in possession of it. 20th-century accountants and courts have calculated the value of the payments made to the Natives, which included future payments of cash, food, horses, cattle, supplies, buildings, schooling, and medical care. In cash terms, the total paid to the tribes in the area of the Louisiana Purchase amounted to about $2.6 billion, or nearly $9 billion in 2016 dollars. Additional sums were paid to the Natives living east of the Mississippi for their lands, as well as payments to Natives living in parts of the west outside the Louisiana Purchase. 
Even before the purchase, Jefferson was planning expeditions to explore and map the lands. He charged Lewis and Clark to "explore the Missouri River, and such principal stream of it, as, by its course and communication with the waters of the Pacific Ocean; whether the Columbia, Oregon, Colorado, or any other river may offer the most direct and practicable communication across the continent for commerce". Jefferson also instructed the expedition to study the region's native tribes (including their morals, language, and culture), weather, soil, rivers, commercial trading, and animal and plant life. Entrepreneurs, most notably John Jacob Astor quickly seized the opportunity and expanded fur trading operations into the Pacific Northwest. Astor's "Fort Astoria" (later Fort George), at the mouth of the Columbia River, became the first permanent white settlement in that area, although it was not profitable for Astor. He set up the American Fur Company in an attempt to break the hold that the Hudson's Bay Company monopoly had over the region. By 1820, Astor had taken over independent traders to create a profitable monopoly; he left the business as a multi-millionaire in 1834. As the frontier moved west, trappers and hunters moved ahead of settlers, searching out new supplies of beaver and other skins for shipment to Europe. The hunters were the first Europeans in much of the Old West and they formed the first working relationships with the Native Americans in the West. They added extensive knowledge of the Northwest terrain, including the important South Pass through the central Rocky Mountains. Discovered about 1812, it later became a major route for settlers to Oregon and Washington. By 1820, however, a new "brigade-rendezvous" system sent company men in "brigades" cross-country on long expeditions, bypassing many tribes. It also encouraged "free trappers" to explore new regions on their own. At the end of the gathering season, the trappers would "rendezvous" and turn in their goods for pay at river ports along the Green River, Upper Missouri, and the Upper Mississippi. St. Louis was the largest of the rendezvous towns. By 1830, however, fashions changed and beaver hats were replaced by silk hats, ending the demand for expensive American furs. Thus ended the era of the mountain men, trappers, and scouts such as Jedediah Smith, Hugh Glass, Davy Crockett, Jack Omohundro, and others. The trade in beaver fur virtually ceased by 1845. There was wide agreement on the need to settle the new territories quickly, but the debate polarized over the price the government should charge. The conservatives and Whigs, typified by the president John Quincy Adams, wanted a moderated pace that charged the newcomers enough to pay the costs of the federal government. The Democrats, however, tolerated a wild scramble for land at very low prices. The final resolution came in the Homestead Law of 1862, with a moderated pace that gave settlers 160 acres free after they worked on it for five years. The private profit motive dominated the movement westward, but the federal government played a supporting role in securing the land through treaties and setting up territorial governments, with governors appointed by the President. The federal government first acquired western territory through treaties with other nations or native tribes. Then it sent surveyors to map and document the land. 
By the 20th century, the federal lands were managed by Washington bureaucracies such as the United States General Land Office in the Interior Department and, after 1891, the Forest Service in the Department of Agriculture. After 1900, dam building and flood control became major concerns. Transportation was a key issue, and the Army (especially the Army Corps of Engineers) was given full responsibility for facilitating navigation on the rivers. The steamboat, first used on the Ohio River in 1811, made possible inexpensive travel using the river systems, especially the Mississippi and Missouri rivers and their tributaries. Army expeditions up the Missouri River in 1818–1825 allowed engineers to improve the technology. For example, the Army's steamboat "Western Engineer" of 1819 combined a very shallow draft with one of the earliest stern wheels. In 1819–1825, Colonel Henry Atkinson developed keelboats with hand-powered paddle wheels. The federal postal system played a crucial role in national expansion. It facilitated expansion into the West by creating an inexpensive, fast, convenient communication system. Letters from early settlers provided information and boosterism to encourage increased migration to the West, helped scattered families stay in touch and provide mutual help, assisted entrepreneurs in finding business opportunities, and made possible regular commercial relationships between merchants in the West and wholesalers and factories back east. The postal service likewise assisted the Army in expanding control over the vast western territories. The widespread circulation of important newspapers by mail, such as the New York Weekly Tribune, facilitated coordination among politicians in different states. The postal service helped to integrate already established areas with the frontier, creating a spirit of nationalism and providing a necessary infrastructure. The army early on assumed the mission of protecting settlers along the Westward Expansion Trails, a policy that was described by U.S. Secretary of War John B. Floyd in 1857: A line of posts running parallel with our frontier, but near to the Indians' usual habitations, placed at convenient distances and suitable positions, and occupied by infantry, would exercise a salutary restraint upon the tribes, who would feel that any foray by their warriors upon the white settlements would meet with prompt retaliation upon their own homes. There was a debate at the time about the best size for the forts, with Jefferson Davis, Winfield Scott, and Thomas Jesup supporting forts that were larger but fewer in number than Floyd proposed. Floyd's plan was more expensive but had the support of settlers and the general public, who preferred that the military remain as close as possible. The frontier area was vast and even Davis conceded that "concentration would have exposed portions of the frontier to Native hostilities without any protection." Government and private enterprise sent many explorers to the West. In 1805–1806, Army lieutenant Zebulon Pike (1779–1813) led a party of 20 soldiers to find the headwaters of the Mississippi. He later explored the Red and Arkansas Rivers in Spanish territory, eventually reaching the Rio Grande. On his return, Pike sighted the peak in Colorado named after him. 
Major Stephen Harriman Long (1784–1864) led the Yellowstone and Missouri expeditions of 1819–1820, but his categorizing in 1823 of the Great Plains as arid and useless led to the region getting a bad reputation as the "Great American Desert", which discouraged settlement in that area for several decades. In 1811, naturalists Thomas Nuttall (1786–1859) and John Bradbury (1768–1823) traveled up the Missouri River documenting and drawing plant and animal life. Artist George Catlin (1796–1872) painted accurate paintings of Native American culture. Swiss artist Karl Bodmer made compelling landscapes and portraits. John James Audubon (1785–1851) is famous for classifying and painting in minute details 500 species of birds, published in Birds of America. The most famous of the explorers was John Charles Frémont (1813–1890), an Army officer in the Corps of Topographical Engineers. He displayed a talent for exploration and a genius at self-promotion that gave him the sobriquet of "Pathmarker of the West" and led him to the presidential nomination of the new Republican Party in 1856. He led a series of expeditions in the 1840s which answered many of the outstanding geographic questions about the little-known region. He crossed through the Rocky Mountains by five different routes and mapped parts of Oregon and California. In 1846–1847, he played a role in conquering California. In 1848–1849, Frémont was assigned to locate a central route through the mountains for the proposed transcontinental railroad, but his expedition ended in near-disaster when it became lost and was trapped by heavy snow. His reports mixed narrative of exciting adventure with scientific data and detailed practical information for travelers. It caught the public imagination and inspired many to head west. Goetzman says it was "monumental in its breadth, a classic of exploring literature". While colleges were springing up across the Northeast, there was little competition on the western frontier for Transylvania University, founded in Lexington, Kentucky, in 1780. It boasted of a law school in addition to its undergraduate and medical programs. Transylvania attracted politically ambitious young men from across the Southwest, including 50 who became United States senators, 101 representatives, 36 governors, and 34 ambassadors, as well as Jefferson Davis, the president of the Confederacy. Most frontiersmen showed little commitment to religion until traveling evangelists began to appear and to produce "revivals". The local pioneers responded enthusiastically to these events and, in effect, evolved their populist religions, especially during the Second Great Awakening (1790–1840), which featured outdoor camp meetings lasting a week or more and which introduced many people to organized religion for the first time. One of the largest and most famous camp meetings took place at Cane Ridge, Kentucky, in 1801. The local Baptists set up small independent churches—Baptists abjured centralized authority; each local church was founded on the principle of independence of the local congregation. On the other hand, bishops of the well-organized, centralized Methodists assigned circuit riders to specific areas for several years at a time, then moved them to fresh territory. Several new denominations were formed, of which the largest was the Disciples of Christ. The established Eastern churches were slow to meet the needs of the frontier. 
The Presbyterians and Congregationalists, since they depended on well-educated ministers, were shorthanded in evangelizing the frontier. They set up a Plan of Union of 1801 to combine resources on the frontier. Historian Mark Wyman calls Wisconsin a "palimpsest" of layer upon layer of peoples and forces, each imprinting permanent influences. He identified these layers as multiple "frontiers" over three centuries: Native American frontier, French frontier, English frontier, fur-trade frontier, mining frontier, and the logging frontier. Finally, the coming of the railroad brought the end of the frontier. Frederick Jackson Turner grew up in Wisconsin during its last frontier stage, and in his travels around the state, he could see the layers of social and political development. One of Turner's last students, Merle Curti used an in-depth analysis of local Wisconsin history to test Turner's thesis about democracy. Turner's view was that American democracy, "involved widespread participation in the making of decisions affecting the common life, the development of initiative and self-reliance, and equality of economic and cultural opportunity. It thus also involved Americanization of immigrant." Curti found that from 1840 to 1860 in Wisconsin the poorest groups gained rapidly in land ownership, and often rose to political leadership at the local level. He found that even landless young farmworkers were soon able to obtain their farms. Free land on the frontier, therefore, created opportunity and democracy, for both European immigrants as well as old stock Yankees. From the 1770s to the 1830s, pioneers moved into the new lands that stretched from Kentucky to Alabama to Texas. Most were farmers who moved in family groups. Historian Louis Hacker shows how wasteful the first generation of pioneers was; they were too ignorant to cultivate the land properly and when the natural fertility of virgin land was used up, they sold out and moved west to try again. Hacker describes that in Kentucky about 1812: Farms were for sale with from ten to fifty acres cleared, possessing log houses, peach and sometimes apple orchards, enclosed in fences, and having plenty of standing timber for fuel. The land was sown in wheat and corn, which were the staples, while hemp [for making rope] was being cultivated in increasing quantities in the fertile river bottoms.... Yet, on the whole, it was an agricultural society without skill or resources. It committed all those sins which characterize wasteful and ignorant husbandry. Grass seed was not sown for hay and as a result, the farm animals had to forage for themselves in the forests; the fields were not permitted to lie in pasturage; a single crop was planted in the soil until the land was exhausted; the manure was not returned to the fields; only a small part of the farm was brought under cultivation, the rest being permitted to stand in timber. Instruments of cultivation were rude and clumsy and only too few, many of them being made on the farm. It is plain why the American frontier settler was on the move continually. It was, not his fear of too close contact with the comforts and restraints of a civilized society that stirred him into a ceaseless activity, nor merely the chance of selling out at a profit to the coming wave of settlers; it was his wasting land that drove him on. Hunger was the goad. The pioneer farmer's ignorance, his inadequate facilities for cultivation, his limited means, of transport necessitated his frequent changes of scene. 
He could succeed only with virgin soil. Hacker adds that the second wave of settlers reclaimed the land, repaired the damage, and practiced more sustainable agriculture. Historian Frederick Jackson Turner explored the individualistic worldview and values of the first generation: What they objected to was arbitrary obstacles, artificial limitations upon the freedom of each member of this frontier folk to work out his career without fear or favor. What they instinctively opposed was the crystallization of differences, the monopolization of opportunity, and the fixing of that monopoly by government or by social customs. The road must be open. The game must be played according to the rules. There must be no artificial stifling of equality of opportunity, no closed doors to the able, no stopping the free game before it was played to the end. More than that, there was an unformulated, perhaps, but very real feeling, that mere success in the game, by which the abler men were able to achieve preëminence gave to the successful ones no right to look down upon their neighbors, no vested title to assert superiority as a matter of pride and to the diminution of the equal right and dignity of the less successful. Manifest Destiny was the controversial belief that the United States was preordained to expand from the Atlantic coast to the Pacific coast, and efforts made to realize that belief. The concept has appeared during colonial times, but the term was coined in the 1840s by a popular magazine which editorialized, "the fulfillment of our manifest destiny...to overspread the continent allotted by Providence for the free development of our yearly multiplying millions." As the nation grew, "Manifest Destiny" became a rallying cry for expansionists in the Democratic Party. In the 1840s, the Tyler and Polk administrations (1841–1849) successfully promoted this nationalistic doctrine. However, the Whig Party, which represented business and financial interests, stood opposed to Manifest Destiny. Whig leaders such as Henry Clay and Abraham Lincoln called for deepening the society through modernization and urbanization instead of simple horizontal expansion. Starting with the annexation of Texas, the expansionists got the upper hand. John Quincy Adams, an anti-slavery Whig, felt the Texas annexation in 1845 to be "the heaviest calamity that ever befell myself and my country". Helping settlers move westward were the emigrant "guide books" of the 1840s featuring route information supplied by the fur traders and the Frémont expeditions, and promising fertile farmland beyond the Rockies.[nb 1] Mexico became independent of Spain in 1821 and took over Spain's northern possessions stretching from Texas to California. American caravans began delivering goods to the Mexican city Santa Fe along the Santa Fe Trail, over the 870-mile (1,400 km) journey which took 48 days from Kansas City, Missouri (then known as Westport). Santa Fe was also the trailhead for the "El Camino Real" (the King's Highway), a trade route which carried American manufactured goods southward deep into Mexico and returned silver, furs, and mules northward (not to be confused with another "Camino Real" which connected the missions in California). A branch also ran eastward near the Gulf (also called the Old San Antonio Road). Santa Fe connected to California via the Old Spanish Trail. The Spanish and Mexican governments attracted American settlers to Texas with generous terms. Stephen F. 
Austin became an "empresario", receiving contracts from Mexican officials to bring in immigrants. In doing so, he also became the de facto political and military commander of the area. Tensions rose, however, after an abortive attempt to establish the independent nation of Fredonia in 1826. William Travis, leading the "war party", advocated for independence from Mexico, while the "peace party" led by Austin attempted to get more autonomy within the current relationship. When Mexican president Santa Anna shifted alliances and joined the conservative Centralist party, he declared himself dictator and ordered soldiers into Texas to curtail new immigration and unrest. However, immigration continued and 30,000 Anglos with 3,000 slaves were settled in Texas by 1835. In 1836, the Texas Revolution erupted. Following losses at the Alamo and Goliad, the Texians won the decisive Battle of San Jacinto to secure independence. At San Jacinto, Sam Houston, commander-in-chief of the Texian Army and future President of the Republic of Texas, famously shouted "Remember the Alamo! Remember Goliad". The U.S. Congress declined to annex Texas, stalemated by contentious arguments over slavery and regional power. Thus, the Republic of Texas remained an independent power for nearly a decade before it was annexed as the 28th state in 1845. The government of Mexico, however, viewed Texas as a runaway province and asserted its ownership. Mexico refused to recognize the independence of Texas in 1836, but the U.S. and European powers did so. Mexico threatened war if Texas joined the U.S., which it did in 1845. American negotiators were turned away by a Mexican government in turmoil. When the Mexican army killed 16 American soldiers in disputed territory, war was at hand. Whigs such as Congressman Abraham Lincoln denounced the war, but it was quite popular outside New England. The Mexican strategy was defensive; the American strategy was a three-pronged offensive, using large numbers of volunteer soldiers. Overland forces seized New Mexico with little resistance and headed to California, which quickly fell to the American land and naval forces. From the main American base at New Orleans, General Zachary Taylor led forces into northern Mexico, winning a series of battles along the way. The U.S. Navy transported General Winfield Scott to Veracruz. He then marched his 12,000-man force west to Mexico City, winning the final battle at Chapultepec. Talk of acquiring all of Mexico fell away when the army discovered how alien Mexican political and cultural values were to America's. As the Cincinnati Herald asked, what would the U.S. do with eight million Mexicans "with their idol worship, heathen superstition, and degraded mongrel races?" The Treaty of Guadalupe Hidalgo of 1848 ceded the territories of California and New Mexico to the United States for $18.5 million (which included the assumption of claims against Mexico by settlers). The Gadsden Purchase in 1853 added southern Arizona, which was needed for a railroad route to California. In all, Mexico ceded half a million square miles (1.3 million km2), including the states-to-be of California, Utah, Arizona, Nevada, New Mexico, and parts of Colorado and Wyoming, in addition to Texas. Managing the new territories and dealing with the slavery issue caused intense controversy, particularly over the Wilmot Proviso, which would have outlawed slavery in the new territories. 
Congress never passed it, but rather temporarily resolved the issue of slavery in the West with the Compromise of 1850. California entered the Union in 1850 as a free state; the other areas remained territories for many years. The new state of Texas grew rapidly as migrants poured into the fertile cotton lands of east Texas. German immigrants started to arrive in the early 1840s because of negative economic, social, and political pressures in Germany. With their investments in cotton lands and slaves, planters established cotton plantations in the eastern districts. The central area of the state was developed more by subsistence farmers who seldom owned slaves. Texas in its Wild West days attracted men who could shoot straight and possessed the zest for adventure, "for masculine renown, patriotic service, martial glory, and meaningful deaths". In 1846, about 10,000 Californios (Hispanics) lived in California, primarily on cattle ranches in what is now the Los Angeles area. A few hundred foreigners were scattered in the northern districts, including some Americans. With the outbreak of war with Mexico in 1846, the U.S. sent in Frémont and a U.S. Army unit, as well as naval forces, and quickly took control. As the war was ending, gold was discovered in the north, and the word soon spread worldwide. Thousands of "forty-niners" reached California by sailing around South America (or taking a short-cut through disease-ridden Panama) or by walking the California Trail. The population soared to over 200,000 in 1852, mostly in the gold districts that stretched into the mountains east of San Francisco. Housing in San Francisco was at a premium, and abandoned ships whose crews had headed for the mines were often converted to temporary lodging. In the goldfields themselves, living conditions were primitive, though the mild climate proved attractive. Supplies were expensive and food was poor, with typical diets consisting mostly of pork, beans, and whiskey. These highly male, transient communities with no established institutions were prone to high levels of violence, drunkenness, profanity, and greed-driven behavior. Without courts or law officers in the mining communities to enforce claims and justice, miners developed their own ad hoc legal system, based on the "mining codes" used in other mining communities abroad. Each camp had its own rules and often handed out justice by popular vote, sometimes acting fairly and at times acting as vigilantes, with Native Americans (Indians), Mexicans, and Chinese generally receiving the harshest sentences. The gold rush radically changed the California economy and brought in an array of professionals, including precious metal specialists, merchants, doctors, and attorneys, who added to the population of miners, saloon keepers, gamblers, and prostitutes. A San Francisco newspaper stated, "The whole country... resounds to the sordid cry of gold! Gold! Gold! while the field is left half planted, the house half-built, and everything neglected but the manufacture of shovels and pickaxes." Over 250,000 miners found a total of more than $200 million in gold in the five years of the California gold rush. As thousands arrived, however, fewer and fewer miners struck their fortune, and most ended up exhausted and broke. Violent bandits often preyed upon the miners, as in the case of Jonathan R. Davis, who killed eleven bandits single-handedly. Camps spread out north and south of the American River and eastward into the Sierras. 
In a few years, nearly all of the independent miners were displaced as mines were purchased and run by mining companies, which then hired low-paid salaried miners. As gold became harder to find and more difficult to extract, individual prospectors gave way to paid work gangs, specialized skills, and mining machinery. Bigger mines, however, caused greater environmental damage. In the mountains, shaft mining predominated, producing large amounts of waste. From 1852, at the end of the '49 gold rush, until 1883, hydraulic mining was used. Despite huge profits being made, the practice fell into the hands of a few capitalists, displaced numerous miners, sent vast amounts of waste into river systems, and did heavy ecological damage to the environment. Hydraulic mining ended when the public outcry over the destruction of farmlands led to the outlawing of this practice. The mountainous areas of the triangle from New Mexico to California to South Dakota contained hundreds of hard rock mining sites, where prospectors discovered gold, silver, copper and other minerals (as well as some soft-rock coal). Temporary mining camps sprang up overnight; most became ghost towns when the ores were depleted. Prospectors spread out and hunted for gold and silver along the Rockies and in the southwest. Soon gold was discovered in Colorado, Utah, Arizona, New Mexico, Idaho, Montana, and South Dakota (by 1864). The discovery of the Comstock Lode, containing vast amounts of silver, resulted in the Nevada boomtowns of Virginia City, Carson City, and Silver City. The wealth from silver, more than from gold, fueled the maturation of San Francisco in the 1860s and helped the rise of some of its wealthiest families, such as that of George Hearst. To get to the rich new lands of the West Coast, there were three options: some sailed around the southern tip of South America during a six-month voyage, some took the treacherous journey across the Panama Isthmus, while 400,000 others walked there on an overland route of more than 2,000 miles (3,200 km); their wagon trains usually left from Missouri. They moved in large groups under an experienced wagonmaster, bringing their clothing, farm supplies, weapons, and animals. These wagon trains followed major rivers, crossed prairies and mountains, and typically ended in Oregon and California. Pioneers generally attempted to complete the journey during a single warm season, usually about six months. By 1836, when the first migrant wagon train was organized in Independence, Missouri, a wagon trail had been cleared to Fort Hall, Idaho. Trails were cleared further and further west, eventually reaching the Willamette Valley in Oregon. This network of wagon trails leading to the Pacific Northwest was later called the Oregon Trail. The eastern half of the route was also used by travelers on the California Trail (from 1843), Mormon Trail (from 1847), and Bozeman Trail (from 1863) before they turned off to their separate destinations. In the "Wagon Train of 1843", some 700 to 1,000 emigrants headed for Oregon; missionary Marcus Whitman led the wagons on the last leg. In 1846, the Barlow Road was completed around Mount Hood, providing a rough but passable wagon trail from the Missouri River to the Willamette Valley: about 2,000 miles (3,200 km). Though the main direction of travel on the early wagon trails was westward, people also used the Oregon Trail to travel eastward. Some did so because they were discouraged and defeated. Some returned with bags of gold and silver. 
Most were returning to pick up their families and move them all back west. These "gobacks" were a major source of information and excitement about the wonders and promises—and dangers and disappointments—of the far West. Not all emigrants made it to their destination. The dangers of the overland route were numerous: snakebites, wagon accidents, violence from other travelers, suicide, malnutrition, stampedes, Native attacks, a variety of diseases (dysentery, typhoid, and cholera were among the most common), exposure, avalanches, etc. One particularly well-known example of the treacherous nature of the journey is the story of the ill-fated Donner Party, which became trapped in the Sierra Nevada mountains during the winter of 1846–1847. Half of the 90 people traveling with the group died from starvation and exposure, and some resorted to cannibalism to survive. Another story of cannibalism featured Alferd Packer and his trek to Colorado in 1874. There were also frequent attacks from bandits and highwaymen, such as the infamous Harpe brothers, who prowled the frontier routes and targeted migrant groups. In Missouri and Illinois, animosity between Mormon settlers and locals grew, a pattern that would be repeated in other states such as Utah years later. Violence finally erupted on October 24, 1838, when militias from both sides clashed; a mass killing of Mormons in Livingston County occurred six days later. A Mormon Extermination Order was issued during these conflicts, and the Mormons were forced to scatter. Brigham Young, seeking to leave American jurisdiction to escape religious persecution in Illinois and Missouri, led the Mormons to the valley of the Great Salt Lake, owned at the time by Mexico but not under its control. A hundred rural Mormon settlements sprang up in what Young called "Deseret", which he ruled as a theocracy. It later became Utah Territory. Young's Salt Lake City settlement served as the hub of their network, which reached into neighboring territories as well. The communalism and advanced farming practices of the Mormons enabled them to succeed. The Mormons often sold goods to wagon trains passing through. After the Battle at Fort Utah in 1850, Brigham Young began to articulate an Indian policy often paraphrased as "It is cheaper to feed them than to fight them." However, Wakara's War and Utah's Black Hawk War demonstrate that hostilities persisted until federal and territorial leaders came to agree that the nearby Indigenous people should all be displaced to the Uinta Reservation. Education became a high priority to protect the beleaguered group, reduce heresy and maintain group solidarity. Following the end of the Mexican–American War in 1848, Utah was ceded to the United States by Mexico. Though the Mormons in Utah had supported U.S. efforts during the war, the federal government, pushed by the Protestant churches, rejected theocracy and polygamy. Founded in 1854, the Republican Party was openly hostile towards the Church of Jesus Christ of Latter-day Saints (LDS Church) in Utah over the practice of polygamy, viewed by most of the American public as an affront to the religious, cultural, and moral values of modern civilization. Confrontations verged on open warfare in the late 1850s as President Buchanan sent in troops. Although no pitched battles were fought and negotiations led to a stand-down, violence still escalated and there were several casualties.
After the Civil War, the federal government systematically took control of Utah; the LDS Church was legally disincorporated in the territory, and members of the church's hierarchy, including Young, were summarily removed and barred from virtually every public office. Meanwhile, successful missionary work in the U.S. and Europe brought a flood of Mormon converts to Utah. During this time, Congress refused to admit Utah into the Union as a state, because statehood would mean an end to direct federal control over the territory and the possible ascension of politicians chosen and controlled by the LDS Church into most if not all federal, state, and local elected offices in the new state. Finally, in 1890, the church leadership announced that polygamy was no longer a central tenet; thereafter a compromise was reached. In 1896, Utah was admitted as the 45th state, with the Mormons dividing between Republicans and Democrats. The federal government provided subsidies for the development of mail and freight delivery, and by 1856, Congress authorized road improvements and an overland mail service to California. The new commercial wagon train services primarily hauled freight. In 1858 John Butterfield (1801–1869) established a stage service that went from Saint Louis to San Francisco in 24 days along a southern route. This route was abandoned in 1861 after Texas joined the Confederacy, in favor of stagecoach services established via Fort Laramie and Salt Lake City, a 24-day journey, with Wells Fargo & Co. as the foremost provider (initially using the old "Butterfield" name). William Russell, hoping to get a government contract for more rapid mail delivery service, started the Pony Express in 1860, cutting delivery time to ten days. He set up over 150 stations about 15 miles (24 km) apart. In 1861, Congress passed the Land-Grant Telegraph Act, which financed the construction of Western Union's transcontinental telegraph lines. Hiram Sibley, Western Union's head, negotiated exclusive agreements with railroads to run telegraph lines along their right-of-way. Eight years before the transcontinental railroad opened, the first transcontinental telegraph linked Omaha, Nebraska, to San Francisco on October 24, 1861. The Pony Express ended in just 18 months because it could not compete with the telegraph. Constitutionally, Congress could not deal with slavery in the states, but it did have jurisdiction in the western territories. California unanimously rejected slavery in 1850 and became a free state. New Mexico allowed slavery, but it was rarely seen there. Kansas was off-limits to slavery by the Missouri Compromise of 1820. Free Soil elements feared that if slavery were allowed, rich planters would buy up the best lands and work them with gangs of slaves, leaving little opportunity for free white men to own farms. Few Southern planters were interested in Kansas, but the idea that slavery was illegal there implied they had a second-class status that was intolerable to their sense of honor, and seemed to violate the principle of states' rights. With the passage of the extremely controversial Kansas–Nebraska Act in 1854, Congress left the decision up to the voters on the ground in Kansas. Across the North, a new major party was formed to fight slavery: the Republican Party, with numerous westerners in leadership positions, most notably Abraham Lincoln of Illinois. To influence the territorial decision, anti-slavery elements (also called "Jayhawkers" or "Free-soilers") financed the migration of politically determined settlers.
But pro-slavery advocates fought back with settlers from Missouri. Violence on both sides was the result; in all, 56 men had been killed by the time the violence abated in 1859. By 1860 the pro-slavery forces were in control—but Kansas had only two slaves. The antislavery forces took over by 1861, as Kansas became a free state. The episode demonstrated that a democratic compromise between North and South over slavery was impossible and served to hasten the Civil War. Despite its large territory, the trans-Mississippi West had a small population, and its wartime story has to a large extent been underplayed in the historiography of the American Civil War. The Confederacy engaged in several important campaigns in the West. However, Kansas, a major area of conflict building up to the war, was the scene of only one battle, at Mine Creek. But its proximity to Confederate lines enabled pro-Confederate guerrillas, such as Quantrill's Raiders, to attack Union strongholds and massacre the residents. In Texas, citizens voted to join the Confederacy; anti-war Germans were hanged. Local troops took over the federal arsenal in San Antonio, with plans to grab the territories of northern New Mexico, Utah, and Colorado, and possibly California. Confederate Arizona was created by Arizona citizens who wanted protection against Apache raids after the United States Army units were moved out. The Confederacy then set its sights on gaining control of the New Mexico Territory. General Henry Hopkins Sibley was tasked with the campaign and, together with his New Mexico Army, marched up the Rio Grande in an attempt to take the mineral wealth of Colorado as well as California. The First Regiment of Volunteers discovered the rebels and immediately warned and reinforced the Union troops at Fort Union. The Battle of Glorieta Pass soon erupted; the Union victory ended the Confederate campaign, and the area west of Texas remained in Union hands. Missouri, a Border South state where slavery was legal, became a battleground when the pro-secession governor, against the vote of the legislature, led troops to the federal arsenal at St. Louis; he was aided by Confederate forces from Arkansas and Louisiana. The Governor of Missouri and part of the state legislature signed an Ordinance of Secession at Neosho, forming the Confederate government of Missouri, with the Confederacy controlling southern Missouri. However, Union General Samuel Curtis regained St. Louis and all of Missouri for the Union. The state was the scene of numerous raids and guerrilla warfare in the west. The U.S. Army after 1850 established a series of military posts across the frontier, designed to stop warfare among Native tribes or between Natives and settlers. Throughout the 19th century, Army officers typically built their careers in peacekeeper roles, moving from fort to fort until retirement. Actual combat experience was uncommon for any one soldier. The most dramatic conflict was the Sioux war in Minnesota in 1862, when Dakota tribes systematically attacked German farms to drive out the settlers. For several days, Dakota attacks at the Lower Sioux Agency, New Ulm, and Hutchinson killed 300 to 400 white settlers. The state militia fought back and Lincoln sent in federal troops. The ensuing battles at Fort Ridgely, Birch Coulee, Fort Abercrombie, and Wood Lake punctuated a six-week war, which ended in an American victory. The federal government tried 425 Natives for murder, and 303 were convicted and sentenced to death.
Lincoln pardoned the majority, but 38 leaders were hanged. The decreased presence of Union troops in the West left behind untrained militias; hostile tribes used the opportunity to attack settlers. The militia struck back hard, most notably by attacking the winter quarters of the Cheyenne and Arapaho tribes, filled with women and children, at the Sand Creek massacre in eastern Colorado in late 1864. Kit Carson and the U.S. Army in 1864 trapped the entire Navajo tribe in New Mexico, where they had been raiding settlers, and put them on a reservation. Within the Indian Territory, now Oklahoma, conflicts arose among the Five Civilized Tribes, most of which, being slaveholders themselves, sided with the South. In 1862, Congress enacted two major laws to facilitate settlement of the West: the Homestead Act and the Pacific Railroad Act. The result by 1890 was millions of new farms in the Plains states, many operated by new immigrants from Germany and Scandinavia. With the war over and slavery abolished, the federal government focused on improving the governance of the territories. It subdivided several territories, preparing them for statehood, following the precedents set by the Northwest Ordinance of 1787. It standardized procedures and the supervision of territorial governments, taking away some local powers, imposing much "red tape", and growing the federal bureaucracy significantly. Federal involvement in the territories was considerable. In addition to direct subsidies, the federal government maintained military posts, provided safety from Native attacks, bankrolled treaty obligations, conducted surveys and land sales, built roads, staffed land offices, made harbor improvements, and subsidized overland mail delivery. Territorial citizens came both to decry federal power and local corruption and, at the same time, to lament that more federal dollars were not sent their way. Territorial governors were political appointees and beholden to Washington, so they usually governed with a light hand, allowing the legislatures to deal with the local issues. In addition to his role as civil governor, a territorial governor was also a militia commander, a local superintendent of Native affairs, and the territory's liaison with federal agencies. The legislatures, on the other hand, spoke for the local citizens, and they were given considerable leeway by the federal government to make local law. These improvements to governance still left plenty of room for profiteering. As Mark Twain wrote while working for his brother, the secretary of Nevada, "The government of my country snubs honest simplicity but fondles artistic villainy, and I think I might have developed into a very capable pickpocket if I had remained in the public service a year or two." "Territorial rings", corrupt associations of local politicians and business owners buttressed with federal patronage, embezzled from Native tribes and local citizens, especially in the Dakota and New Mexico territories. In acquiring, preparing, and distributing public land to private ownership, the federal government generally followed the system set forth by the Land Ordinance of 1785. Federal exploration and scientific teams would undertake reconnaissance of the land and determine Native American habitation. Through treaties, the land titles would be ceded by the resident tribes. Then surveyors would create detailed maps marking the land into squares of six miles (10 km) on each side, subdivided first into one-square-mile blocks, then into 160-acre (0.65 km2) lots.
Townships would be formed from the lots and sold at public auction. Unsold land could be purchased from the land office at a minimum price of $1.25 per acre. As part of public policy, the government would award public land to certain groups such as veterans, through the use of "land scrip". The scrip traded on a financial market, often at below the $1.25 per acre minimum price set by law, which gave speculators, investors, and developers another way to acquire large tracts of land cheaply. Land policy became politicized by competing factions and interests, and the question of slavery on new lands was contentious. As a counter to land speculators, farmers formed "claims clubs" to enable them to buy larger tracts than the 160-acre (0.65 km2) allotments by trading among themselves at controlled prices. In 1862, Congress passed three important bills that transformed the land system. The Homestead Act granted 160 acres (0.65 km2) free to each settler who improved the land for five years; citizens and non-citizens, including squatters and women, were all eligible. The only cost was a modest filing fee. The law was especially important in the settling of the Plains states. Many took a free homestead and others purchased their land from railroads at low rates. The Pacific Railroad Act of 1862 provided for the land needed to build the transcontinental railroad. The land given to the railroads alternated with government-owned tracts saved for free distribution to homesteaders. To be equitable, the federal government reduced each such tract to 80 acres (32 ha) because of its perceived higher value given its proximity to the rail line. Railroads had up to five years to sell or mortgage their land after tracks were laid, after which unsold land could be purchased by anyone. Often railroads sold some of their government-acquired land to homesteaders immediately to encourage settlement and the growth of markets the railroads would then be able to serve. Nebraska railroads in the 1870s were strong boosters of lands along their routes. They sent agents to Germany and Scandinavia with package deals that included cheap transportation for the family as well as its furniture and farm tools, and they offered long-term credit at low rates. Boosterism succeeded in attracting adventurous American and European families to Nebraska, helping them purchase land grant parcels on good terms. The selling price depended on such factors as soil quality, water, and distance from the railroad. The Morrill Act of 1862 provided land grants to states to begin colleges of agriculture and mechanical arts (engineering). Black colleges became eligible for these land grants in 1890. The Act succeeded in its goals to open new universities and make farming more scientific and profitable. In the 1850s, the U.S. government sponsored surveys that charted the remaining unexplored regions of the West in order to plan possible routes for a transcontinental railroad. Much of this work was undertaken by the Corps of Engineers, Corps of Topographical Engineers, and Bureau of Explorations and Surveys, and became known as "The Great Reconnaissance". Regionalism animated debates in Congress regarding the choice of a northern, central, or southern route. Engineering requirements for the rail route were an adequate supply of water and wood, and as nearly level a route as possible, given the weak locomotives of the era. Proposals to build a transcontinental line failed because of Congressional disputes over slavery.
With the secession of the Confederate states in 1861, the modernizers in the Republican party took over Congress and wanted a line to link to California. Private companies were to build and operate the line. Construction would be done by unskilled laborers who would live in temporary camps along the way. Immigrants from China and Ireland did most of the construction work. Theodore Judah, the chief engineer of the Central Pacific, surveyed the route from San Francisco east. Judah's tireless lobbying efforts in Washington were largely responsible for the passage of the 1862 Pacific Railroad Act, which authorized construction of both the Central Pacific and the Union Pacific (which built west from Omaha). In 1862 four rich Sacramento merchants (Leland Stanford, Collis Huntington, Charles Crocker, and Mark Hopkins) took charge, with Crocker in charge of construction. The line was completed in May 1869. Coast-to-coast passenger travel in 8 days now replaced wagon trains or sea voyages that took 6 to 10 months and cost much more. The road was built with mortgages from New York, Boston, and London, backed by land grants. There were no federal cash subsidies, but there was a loan to the Central Pacific that was eventually repaid at six percent interest. The federal government offered land grants in a checkerboard pattern. The railroad sold every other square, with the government opening its half to homesteaders. The government also loaned money—later repaid—at $16,000 per mile on level stretches, and $32,000 to $48,000 in mountainous terrain. Local and state governments also aided the financing. Most of the manual laborers on the Central Pacific were new arrivals from China. Kraus shows how these men lived and worked, and how they managed their money. He concludes that senior officials quickly realized the high degree of cleanliness and reliability of the Chinese. The Central Pacific employed over 12,000 Chinese workers, 90% of its manual workforce. Ong explores whether or not the Chinese railroad workers were exploited by the railroad, with whites in better positions. He finds the railroad set different wage rates for whites and Chinese and used the latter in the more menial and dangerous jobs, such as the handling and pouring of nitroglycerin. However, the railroad also provided the camps and food the Chinese wanted and protected the Chinese workers from threats from whites. Building the railroad required six main activities: surveying the route, blasting a right of way, building tunnels and bridges, clearing and laying the roadbed, laying the ties and rails, and maintaining and supplying the crews with food and tools. The work was highly physical, using horse-drawn plows and scrapers, and manual picks, axes, sledgehammers, and handcarts. A few steam-driven machines, such as shovels, were used. The rails were iron (steel came a few years later), weighed 700 lb (320 kg), and required five men to lift. For blasting, they used black powder. The Union Pacific construction crews, mostly Irish Americans, averaged about two miles (3 km) of new track per day. Six transcontinental railroads were built in the Gilded Age (plus two in Canada); they opened up the West to farmers and ranchers. From north to south they were the Northern Pacific, Milwaukee Road, and Great Northern along the Canada–U.S. border; the Union Pacific/Central Pacific in the middle; and to the south the Santa Fe and the Southern Pacific. All but the Great Northern of James J. Hill relied on land grants.
The financial stories were often complex. For example, the Northern Pacific received its major land grant in 1864. Financier Jay Cooke (1821–1905) was in charge until 1873, when he went bankrupt. Federal courts, however, kept bankrupt railroads in operation. In 1881 Henry Villard (1835–1900) took over and finally completed the line to Seattle. But the line went bankrupt in the Panic of 1893 and Hill took it over. He then merged several lines with financing from J.P. Morgan, but President Theodore Roosevelt broke them up in 1904. In the first year of operation, 1869–70, 150,000 passengers made the long trip. Settlers were encouraged with promotions to come West on free scouting trips to buy railroad land on easy terms spread over several years. The railroads had "Immigration Bureaus" which advertised package low-cost deals including passage and land on easy terms for farmers in Germany and Scandinavia. The prairies, they were promised, did not mean backbreaking toil because "settling on the prairie which is ready for the plow is different from plunging into a region covered with timber". The settlers were customers of the railroads, shipping their crops and cattle out, and bringing in manufactured products. All manufacturers benefited from the lower costs of transportation and the much larger radius of business. White concludes with a mixed verdict. The transcontinentals did open up the West to settlement, brought in many thousands of high-tech, highly paid workers and managers, created thousands of towns and cities, oriented the nation onto an east–west axis, and proved highly valuable for the nation as a whole. On the other hand, too many were built, and they were built too far ahead of actual demand. The result was a bubble that left heavy losses to investors and led to poor management practices. By contrast, as White notes, the lines in the Midwest and East, supported by a very large population base, fostered farming, industry, and mining while generating steady profits and receiving few government benefits. After the Civil War, many from the East Coast and Europe were lured west by reports from relatives and by extensive advertising campaigns promising "the Best Prairie Lands", "Low Prices", "Large Discounts For Cash", and "Better Terms Than Ever!". The new railroads provided the opportunity for migrants to go out and take a look, with special family tickets, the cost of which could be applied to land purchases offered by the railroads. Farming the plains was indeed more difficult than back east. Water management was more critical, lightning fires were more prevalent, the weather was more extreme, and rainfall was less predictable. The fearful stayed home. The actual migrants looked beyond fears of the unknown. Their chief motivation to move west was to find a better economic life than the one they had. Farmers sought larger, cheaper, and more fertile land; merchants and tradesmen sought new customers and new leadership opportunities. Laborers wanted higher paying work and better conditions. As settlers moved west, they had to face challenges along the way, such as the lack of wood for housing, bad weather like blizzards and droughts, and fearsome tornadoes. In the treeless prairies homesteaders built sod houses. One of the greatest plagues that hit the homesteaders was the locust plague of 1874, which devastated the Great Plains. These challenges hardened the settlers who tamed the frontier.
After Russia's defeat in the Crimean War, Tsar Alexander II of Russia decided to sell the Russian American territory of Alaska to the United States. The decision was motivated in part by a need for money and in part by a recognition within the Russian state that Britain could easily capture Alaska in any future conflict between the two nations. U.S. Secretary of State William Seward negotiated with the Russians to acquire the tremendous landmass of Alaska, an area roughly one-fifth the size of the rest of the United States. On March 30, 1867, the U.S. purchased the territory from the Russians for $7.2 million ($166 million in 2025 dollars). The transfer ceremony was completed in Sitka on October 18, 1867, as Russian soldiers handed over the territory to the United States Army. Seward and other proponents of the Alaska Purchase had intended to continue acquiring territory on the northern frontier, with purchases of Arctic regions such as Greenland potentially culminating in an annexation of Canada, or after its 1867 Confederation, at least the western regions that had not yet joined Canada. Critics at the time decried the purchase as "Seward's Folly", reasoning that there were no natural resources in the new territory and that no one could be bothered to live in such a cold, icy climate. Although the development and settlement of Alaska grew slowly, the discovery of goldfields during the Klondike Gold Rush in 1896, Nome Gold Rush in 1898, and Fairbanks Gold Rush in 1902 brought thousands of miners into the territory, thus propelling Alaska's prosperity for decades to come. Major oil discoveries in the late 20th century made the state rich. In 1889, Washington opened 2,000,000 acres (8,100 km2) of unoccupied lands in the Oklahoma territory. On April 22, over 100,000 settlers and cattlemen (known as "boomers") lined up at the border and, when the army's guns and bugles gave the signal, began a mad dash to stake their claims in the Land Run of 1889. A witness wrote, "The horsemen had the best of it from the start. It was a fine race for a few minutes, but soon the riders began to spread out like a fan, and by the time they reached the horizon they were scattered about as far as the eye could see". In a single day, the towns of Oklahoma City, Norman, and Guthrie came into existence. In the same manner, millions of acres of additional land were opened up and settled in the following four years. Indian wars have occurred throughout the United States, though the conflicts are generally separated into two categories: the Indian wars east of the Mississippi River and the Indian wars west of the Mississippi. The U.S. Bureau of the Census (1894) provided an estimate of deaths: The "Indian" wars under the government of the United States have been more than 40 in number. They have cost the lives of about 19,000 white men, women and children, including those killed in individual combats, and the lives of about 30,000 Indians. The actual number of killed and wounded Indians must be very much higher than the given... Fifty percent additional would be a safe estimate... Historian Russell Thornton estimates that from 1800 to 1890, the Native population declined from 600,000 to as few as 250,000. The depopulation was principally caused by smallpox and other infectious diseases. Many tribes in Texas, such as the Karankawa, Akokisa, Bidai, and others, were extinguished due to conflicts with Texan settlers. The rapid depopulation of the Native Americans after the Civil War alarmed the U.S.
government, and the Doolittle Committee was formed to investigate the causes as well as provide recommendations for preserving the population. The solutions presented by the committee, such as the establishment of five boards of inspection to prevent abuses against Natives, had little effect as large-scale Western migration commenced. British merchants and government agents began supplying weapons to Indians living in the United States following the Revolution (1783–1812) in the hope that, if a war broke out, they would fight on the British side. The British further planned to set up an Indian nation in the Ohio-Wisconsin area to block further American expansion. The U.S. protested and declared war in 1812. Most Indian tribes supported the British, especially those allied with Tecumseh, but they were ultimately defeated by General William Henry Harrison. Many refugees from defeated tribes went over the border to Canada; those in the South went to Florida while it was under Spanish control as part of the Viceroyalty of New Spain. After the U.S. purchased Florida from Spain, it paid the local Indians to relocate to Indian Territory (Oklahoma) and most did so. However, the Seminole Wars comprised a series of military operations against several hundred Seminole warriors on the other side of the frontier who refused to abandon their ancestral graves. It was a matter of ambushes and raids in swampy country and ended in the frontier region in 1842. The conflict dragged on for years, received massive newspaper coverage, and helped shape the southern identity. Native warriors in the West, using their traditional style of limited, battle-oriented warfare, confronted the U.S. Army. The Natives emphasized bravery in combat while the Army put its emphasis not so much on individual combat as on building networks of forts, developing a logistics system, and using the telegraph and railroads to coordinate and concentrate its forces. Plains Indian intertribal warfare bore no resemblance to the "modern" warfare practiced by the Americans along European lines, which drew on vast advantages in population and resources. Many tribes avoided warfare and others supported the U.S. Army. The tribes hostile to the government continued to pursue their traditional brand of fighting and, therefore, were unable to have any permanent success against the Army. Indian wars were fought throughout the western regions, with more conflicts in the states bordering Mexico than in the interior states. Arizona ranked highest, with 310 known battles fought within the state's boundaries between Americans and the Natives. Arizona also ranked highest in war deaths, with 4,340 killed, including soldiers, civilians, and Native Americans. That was more than twice as many as occurred in Texas, the second-highest-ranking state. Most of the deaths in Arizona were caused by the Apache. Michno also says that fifty-one percent of the Indian war battles between 1850 and 1890 took place in Arizona, Texas, and New Mexico, as well as thirty-seven percent of the casualties in the country west of the Mississippi River. The Comanche fought a number of conflicts against Spanish and later Mexican and American armies. Comanche power peaked in the 1840s when they conducted large-scale raids hundreds of miles into Mexico proper, while also warring against the Anglo-Americans and Tejanos who had settled in independent Texas.
One of the deadliest Indian wars was the Snake War of 1864–1868, conducted by a confederacy of Northern Paiute, Bannock, and Shoshone Native Americans, called the "Snake Indians", against the United States Army in the states of Oregon, Nevada, California, and Idaho, along the Snake River. The war started when tensions arose between the local Natives and the flood of pioneer wagon trains encroaching on their lands, which resulted in competition for food and resources. Natives in this group attacked and harassed emigrant parties and miners crossing the Snake River Valley, which brought retaliation from the white settlements and the intervention of the United States Army. The war resulted in a total of 1,762 men killed, wounded, or captured on both sides. Unlike other Indian wars, the Snake War has been widely forgotten in United States history because it received only limited coverage. The Colorado War, fought by the Cheyenne, Arapaho, and Sioux, took place in the territories from Colorado to Nebraska. The conflict was fought in 1863–1865, while the American Civil War was still ongoing. Caused by the breakdown of relations between the Natives and the white settlers in the region, the war was infamous for the atrocities committed by both sides. White militias destroyed Native villages and killed Native women and children, as in the bloody Sand Creek massacre, and the Natives raided ranches and farms and killed white families, as in the American Ranch massacre and the Raid on Godfrey Ranch. In the Apache Wars, Colonel Christopher "Kit" Carson forced the Mescalero Apache onto a reservation in 1862. In 1863–1864, Carson used a scorched earth policy in the Navajo Campaign, burning Navajo fields and homes, and capturing or killing their livestock. He was aided by other Native tribes with long-standing enmity toward the Navajos, chiefly the Utes. Another prominent conflict of these wars was Geronimo's fight against settlements in the Southwest in the 1880s. The Apaches under his command conducted ambushes on U.S. cavalry and forts, such as their attack at Cibecue Creek, while also raiding prominent farms and ranches, such as their infamous attack on the Empire Ranch that killed three cowboys. The U.S. finally induced the last hostile Apache band under Geronimo to surrender in 1886. During the Comanche Campaign, the Red River War was fought in 1874–1875 in response to the Comanches' dwindling food supply of buffalo, as well as the refusal of a few bands to move onto reservations. Comanches started raiding small settlements in Texas, which led to the Battle of Buffalo Wallow and the Second Battle of Adobe Walls, fought by buffalo hunters, and the Battle of Lost Valley against the Texas Rangers. The war ended with a final confrontation between the Comanches and the U.S. Cavalry in Palo Duro Canyon. The last Comanche war chief, Quanah Parker, surrendered in June 1875, finally ending the wars between Texans and Natives. Red Cloud's War was led by the Lakota chief Red Cloud against the U.S. military, which was erecting forts along the Bozeman Trail. It was the most successful campaign against the U.S. during the Indian Wars. By the Treaty of Fort Laramie (1868), the U.S. granted a large reservation to the Lakota, without military presence; it included the entire Black Hills. Captain Jack was a chief of the Native American Modoc tribe of California and Oregon, and was their leader during the Modoc War.
With 53 Modoc warriors, Captain Jack held off 1,000 men of the U.S. Army for seven months. During peace negotiations, Captain Jack killed General Edward Canby. In June 1877, in the Nez Perce War, the Nez Perce under Chief Joseph, unwilling to give up their traditional lands and move to a reservation, undertook a 1,200-mile (2,000 km) fighting retreat from Oregon to near the Canada–U.S. border in Montana. Numbering only 200 warriors, the Nez Perce "battled some 2,000 American regulars and volunteers of different military units, together with their Native auxiliaries of many tribes, in a total of eighteen engagements, including four major battles and at least four fiercely contested skirmishes." The Nez Perce were finally surrounded at the Battle of Bear Paw and surrendered. The Great Sioux War of 1876 was conducted by the Lakota under Sitting Bull and Crazy Horse. The conflict began after repeated violations of the Treaty of Fort Laramie (1868) once gold was discovered in the Black Hills. One of its famous battles was the Battle of the Little Bighorn, in which combined Sioux and Cheyenne forces defeated the 7th Cavalry, led by General George Armstrong Custer. The Ute War, fought by the Ute people against settlers in Utah and Colorado, led to two battles: the Meeker massacre, which killed 11 Indian agency employees, and the Pinhook massacre, which killed 13 armed ranchers and cowboys. The Ute conflicts finally ended after the Posey War of 1923, which was fought against settlers and law enforcement. The end of the major Indian wars came at the Wounded Knee massacre on December 29, 1890, where the 7th Cavalry attempted to disarm a Sioux man and precipitated a massacre in which about 150 Sioux men, women, and children were killed. Only thirteen days before, Sitting Bull had been killed with his son Crow Foot in a gun battle with a group of Native police that had been sent by the American government to arrest him. Additional conflicts and incidents, such as the Bluff War (1914–1915) and the Posey War, would occur into the early 1920s, though the last combat engagement between U.S. Army soldiers and Native Americans occurred at the Battle of Bear Valley on January 9, 1918. As the frontier moved westward, the establishment of U.S. military forts moved with it, representing and maintaining federal sovereignty over new territories. The military garrisons usually lacked defensible walls but were seldom attacked. They served as bases for troops at or near strategic areas, particularly for counteracting the Native presence. For example, Fort Bowie protected Apache Pass in southern Arizona along the mail route between Tucson and El Paso and was used to launch attacks against Cochise and Geronimo. Fort Laramie and Fort Kearny helped protect immigrants crossing the Great Plains, and a series of posts in California protected miners. Forts were constructed to launch attacks against the Sioux. As Indian reservations sprang up, the military set up forts to protect them. Forts also guarded the Union Pacific and other rail lines. Other important forts were Fort Sill, Oklahoma, Fort Smith, Arkansas, Fort Snelling, Minnesota, Fort Union, New Mexico, Fort Worth, Texas, and Fort Walla Walla in Washington. Fort Omaha, Nebraska, was home to the Department of the Platte, and was responsible for outfitting most Western posts for more than 20 years after its founding in the late 1870s. Fort Huachuca in Arizona was also originally a frontier post and is still in use by the United States Army.
Settlers on their way overland to Oregon and California became targets of Native threats. Robert L. Munkres read 66 diaries of parties traveling the Oregon Trail between 1834 and 1860 to estimate the actual dangers they faced from Native attacks in Nebraska and Wyoming. The vast majority of diarists reported no armed attacks at all. However, many did report harassment by Natives who begged or demanded tolls and stole horses and cattle. Madsen reports that the Shoshoni and Bannock tribes north and west of Utah were more aggressive toward wagon trains. The federal government attempted to reduce tensions and create new tribal boundaries in the Great Plains with two new treaties in the early 1850s. The Treaty of Fort Laramie established tribal zones for the Sioux, Cheyennes, Arapahos, Crows, and others, and allowed for the building of roads and posts across the tribal lands. A second treaty secured safe passage along the Santa Fe Trail for wagon trains. In return, the tribes would receive, for ten years, annual compensation for damages caused by migrants. The Kansas and Nebraska territories also became contentious areas as the federal government sought those lands for the future transcontinental railroad. In the Far West settlers began to occupy land in Oregon and California before the federal government secured title from the native tribes, causing considerable friction. In Utah, the Mormons also moved in before federal ownership was obtained. A new policy of establishing reservations came gradually into shape after the boundaries of the "Indian Territory" began to be ignored. In providing for Indian reservations, Congress and the Office of Indian Affairs hoped to de-tribalize Native Americans and prepare them for integration with the rest of American society, the "ultimate incorporation into the great body of our citizen population". This allowed for the development of dozens of riverfront towns along the Missouri River in the new Nebraska Territory, which was carved from the remainder of the Louisiana Purchase after the Kansas–Nebraska Act. Influential pioneer towns included Omaha, Nebraska City, and St. Joseph. American attitudes towards Natives during this period ranged from malevolence ("the only good Indian is a dead Indian") to misdirected humanitarianism (Indians live in "inferior" societies and by assimilation into white society they can be redeemed) to somewhat realistic (Native Americans and settlers could co-exist in separate but equal societies, dividing up the remaining western land). Dealing with nomadic tribes complicated the reservation strategy, and decentralized tribal power made treaty-making difficult among the Plains Indians. Conflicts erupted in the 1850s, resulting in various Indian wars. In these times of conflict, Natives became more protective of their territory against white men entering it. As in the case of Oliver Loving, they would sometimes attack cowboys and their cattle caught crossing the borders of their land. They would also prey upon livestock if food was scarce during hard times. However, the relationship between cowboys and Native Americans was more cooperative than it is usually portrayed, and cowboys would occasionally pay a fee of 10 cents per cow for permission to travel through tribal land. Natives also preyed upon stagecoaches traveling the frontier for their horses and valuables. After the Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S.
7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley: The frontier army was a conventional military force trying to control, by conventional military methods, a people that did not behave like conventional enemies and, indeed, quite often were not enemies at all. This is the most difficult of all military assignments, whether in Africa, Asia, or the American West. Westerners were proud of their leadership in the movement for democracy and equality, a major theme for Frederick Jackson Turner. The new states of Kentucky, Tennessee, Alabama, and Ohio were more democratic than the parent states back East in terms of politics and society. The Western states were the first to give women the right to vote. By 1900 the West, especially California and Oregon, led the Progressive movement. Scholars have examined the social history of the west in search of the American character. The history of Kansas, argued historian Carl L. Becker a century ago, reflects American ideals. He wrote: "The Kansas spirit is the American spirit double distilled. It is a new grafted product of American individualism, American idealism, American intolerance. Kansas is America in microcosm." The cities played an essential role in the development of the frontier, as transportation hubs, financial and communications centers, and providers of merchandise, services, and entertainment. As the railroads pushed westward into the unsettled territory after 1860, they built service towns to handle the needs of railroad construction crews, train crews, and passengers who ate meals at scheduled stops. In most of the South, there were very few cities of any size for miles around, and this pattern held for Texas as well, so railroads did not arrive until the 1880s. They then shipped the cattle out, and cattle drives became short-distance affairs. However, the passenger trains were often the targets of armed gangs. Denver's economy before 1870 had been rooted in mining; it then grew by expanding its role in railroads, wholesale trade, manufacturing, food processing, and servicing the growing agricultural and ranching hinterland. Between 1870 and 1890, manufacturing output soared from $600,000 to $40 million, and the population grew twentyfold to 107,000. Denver had always attracted miners, workers, whores, and travelers. Saloons and gambling dens sprang up overnight. The city fathers boasted of its fine theaters, especially the Tabor Grand Opera House, built in 1881. By 1890, Denver had grown to be the 26th largest city in America, and the fifth-largest city west of the Mississippi River. The boom times attracted millionaires and their mansions, as well as hustlers, poverty, and crime. Denver gained regional notoriety with its range of bawdy houses, from the sumptuous quarters of renowned madams to the squalid "cribs" located a few blocks away. Business was good; visitors spent lavishly, then left town. As long as madams conducted their business discreetly, and "crib girls" did not advertise their availability too crudely, authorities took their bribes and looked the other way. Occasional cleanups and crackdowns satisfied the demands for reform. With its giant mountain of copper, Butte, Montana, was the largest, richest, and rowdiest mining camp on the frontier.
It was an ethnic stronghold, with the Irish Catholics in control of politics and of the best jobs at the leading mining corporation, Anaconda Copper. City boosters opened a public library in 1894. Ring argues that the library was originally a mechanism of social control, "an antidote to the miners' proclivity for drinking, whoring, and gambling". It was also designed to promote middle-class values and to convince Easterners that Butte was a cultivated city. European immigrants often built communities of similar religious and ethnic backgrounds. For example, many Finns went to Minnesota and Michigan, Swedes and Norwegians to Minnesota and the Dakotas, Irish to railroad centers along the transcontinental lines, Volga Germans to North Dakota, English converts to the LDS Church to Utah, other English immigrants to the Rocky Mountain states (Colorado, Wyoming, and Idaho), and German Jews to Portland, Oregon. African Americans moved West as soldiers, as well as cowboys (see Black cowboy), farmhands, saloon workers, cooks, and outlaws. The Buffalo Soldiers were soldiers in the all-black 9th and 10th Cavalry regiments, and 24th and 25th Infantry Regiments of the U.S. Army. They had white officers and served in numerous western forts. About 4,000 black people came to California in Gold Rush days. In 1879, after the end of Reconstruction in the South, several thousand Freedmen moved from Southern states to Kansas. Known as the Exodusters, they were lured by the prospect of good, cheap Homestead Law land and better treatment. The all-black town of Nicodemus, Kansas, which was founded in 1877, was an organized settlement that predates the Exodusters but is often associated with them. At the time in America, women had few rights and were usually confined to domestic spheres. Black and other minority women had even fewer rights and privileges than white women. The number of documented Black women who moved to the West during the 1800s and early 1900s is small, although they were certainly present. Many worked as schoolteachers on the plains, while others were horse-breakers, midwives, businesswomen, barkeepers, nurses, and mail carriers. Black women still faced racial and gender prejudice, but often had more freedoms than they would have in the East, due to the new and diverse nature of the American Frontier. Key Black women in the American West are often overlooked. Mary Fields, commonly referred to as "Stagecoach Mary" or "Black Mary", was a Black woman who paved her own way in the West. She was an enslaved person in Tennessee and then moved out West, where she worked as a mail carrier, a barkeeper, and a whorehouse owner in Miles City, Montana. Known for her large stature and tough nature, Fields challenged common gender and racial stereotypes through her demeanor and her business pursuits. Clara Brown, or "Aunt Clara," is another example of a Black woman making her own way in this region. Brown arrived in Colorado and, with her frugal strategies, slowly began to acquire land and capital near Central City and Denver. She was known to be very kind and philanthropic, and often fed needy miners and opened her home to those less fortunate than herself. Unfortunately, Clara's story does not end well, as she lost nearly all of her property when a flood destroyed her land records and she could no longer prove ownership. Mary Ellen Pleasant, also known as "Mammy Pleas" or the "Angel of the West," is another key Black female figure from the region.
Pleasant was known for caring for troubled men, women, and children in her area. She created safe havens for abused women, a rarity in the region at the time. Defying gender and racial stereotypes, Pleasant did not confine herself to the domestic sphere or just caring for others. She was also a business owner, a civil rights activist, and a self-made millionaire in California during the Gold Rush era. The California gold rush included thousands of Mexican and Chinese arrivals. Chinese migrants, many of whom were impoverished peasants, provided the major part of the workforce for the building of the Central Pacific portion of the transcontinental railroad. Most of them went home by 1870, when the railroad was finished. Those who stayed on worked in mining and agriculture or opened small shops such as groceries, laundries, and restaurants. Hostility against the Chinese remained high in the western states and territories, as seen in the Chinese Massacre Cove episode and the Rock Springs massacre. The Chinese were generally forced into self-sufficient "Chinatowns" in cities such as San Francisco, Portland, Seattle, and Los Angeles. In Los Angeles, the last major anti-Chinese riot took place in 1871, after which local law enforcement grew stronger. In the late 19th century, Chinatowns were squalid slums known for their vice, prostitution, drugs, and violent battles between "tongs". By the 1930s, however, Chinatowns had become clean, safe and attractive tourist destinations. The first Japanese arrived in the U.S. in 1869, when 22 people from samurai families settled in Placer County, California, to establish the Wakamatsu Tea and Silk Farm Colony. Japanese were recruited to work on plantations in Hawaii, beginning in 1885. By the late 19th century, more Japanese emigrated to Hawaii and the American mainland. The Issei, or first-generation Japanese immigrants, were not allowed to become U.S. citizens because they were not "free white person[s]", per the United States Naturalization Law of 1790. This did not change until the passage of the Immigration and Nationality Act of 1952, known as the McCarran-Walter Act, which allowed Japanese immigrants to become naturalized U.S. citizens. By 1920, Japanese-American farmers produced US$67 million worth of crops, more than ten percent of California's total crop value. There were 111,000 Japanese Americans in the U.S., of which 82,000 were immigrants and 29,000 were U.S. born. Congress passed the Immigration Act of 1924, effectively ending all Japanese immigration to the U.S. The U.S.-born children of the Issei were citizens, in accordance with the 14th Amendment to the United States Constitution. The great majority of Hispanics who had been living in the former territories of New Spain remained and became American citizens in 1848. The 10,000 or so Californios also became U.S. citizens. They lived in southern California and after 1880 were overshadowed by the hundreds of thousands of new arrivals from the eastern states. Those in New Mexico dominated towns and villages that changed little until well into the 20th century. New migrants arrived from Mexico, especially after the Mexican Revolution, beginning in 1910, terrorized thousands of villages all across Mexico. Most refugees went to Texas or California, and soon poor barrios appeared in many border towns.
The California "Robin Hood", Joaquin Murrieta, led a gang in the 1850s which burned houses, killed exploiting miners, robbed stagecoaches of landowners and fought against violence and discrimination against Latin Americans. In Texas, Juan Cortina led a 20-year campaign against Anglos and the Texas Rangers, starting around 1859. On the Great Plains very few single men attempted to operate a farm or ranch; farmers clearly understood the need for a hard-working wife, and numerous children, to handle the many chores, including child-rearing, feeding, and clothing the family, managing the housework, and feeding the hired hands. During the early years of settlement, farm women played an integral role in assuring family survival by working outdoors. After a generation or so, women increasingly left the fields, thus redefining their roles within the family. New conveniences such as sewing and washing machines encouraged women to turn to domestic roles. The scientific housekeeping movement, promoted across the land by the media and government extension agents, as well as county fairs which featured achievements in home cookery and canning, advice columns for women in the farm papers, and home economics courses in the schools all contributed to this trend. Although the eastern image of farm life on the prairies emphasizes the isolation of the lonely farmer and farm life, in reality, rural folk created a rich social life for themselves. They often sponsored activities that combined work, food, and entertainment such as barn raisings, corn huskings, quilting bees, Grange meetings, church activities, and school functions. The womenfolk organized shared meals and potluck events, as well as extended visits between families. Childhood on the American frontier is contested territory. One group of scholars, following the lead of novelists Willa Cather and Laura Ingalls Wilder, argue the rural environment was beneficial to the child's upbringing. Historians Katherine Harris and Elliott West write that rural upbringing allowed children to break loose from urban hierarchies of age and gender, promoted family interdependence, and at the end produced children who were more self-reliant, mobile, adaptable, responsible, independent and more in touch with nature than their urban or eastern counterparts. On the other hand, historians Elizabeth Hampsten and Lillian Schlissel offer a grim portrait of loneliness, privation, abuse, and demanding physical labor from an early age. Riney-Kehrberg takes a middle position. Entrepreneurs set up shops and businesses to cater to the miners. World-famous were the houses of prostitution found in every mining camp worldwide. Prostitution was a growth industry attracting sex workers from around the globe, pulled in by the money, despite the harsh and dangerous working conditions and low prestige. Chinese women were frequently sold by their families and taken to the camps as prostitutes; they had to send their earnings back to the family in China. In Virginia City, Nevada, a prostitute, Julia Bulette, was one of the few who achieved "respectable" status. She nursed victims of an influenza epidemic; this gave her acceptance in the community and the support of the sheriff. The townspeople were shocked when she was murdered in 1867; they gave her a lavish funeral and speedily tried and hanged her assailant. Until the 1890s, madams predominantly ran the businesses, after which male pimps took over, and the treatment of the women generally declined. 
It was not uncommon for bordellos in Western towns to operate openly, without the stigma of East Coast cities. Gambling and prostitution were central to life in these western towns, and only later—as the female population increased, reformers moved in, and other civilizing influences arrived—did prostitution become less blatant and less common. After a decade or so the mining towns attracted respectable women who ran boarding houses, organized church societies, worked as laundresses and seamstresses, and strove for independent status. Whenever a new settlement or mining camp started, one of the first buildings or tents erected would be a gambling hall. As the population grew, gambling halls were typically the largest and most ornately decorated buildings in any town and often housed a bar, stage for entertainment, and hotel rooms for guests. These establishments were a driving force behind the local economy, and many towns measured their prosperity by the number of gambling halls and professional gamblers they had. Towns that were friendly to gambling were typically known to sporting men as "wide-awake" or "wide-open". Cattle towns in Texas, Oklahoma, Kansas, and Nebraska became famous centers of gambling. The cowboys had been accumulating their wages and postponing their pleasures until they finally arrived in town with money to wager. Abilene, Dodge City, Wichita, Omaha, and Kansas City all had an atmosphere that was convivial to gaming. Such an atmosphere also invited trouble, and such towns developed reputations as lawless and dangerous places. Historian Waddy W. Moore uses court records to show that on the sparsely settled Arkansas frontier lawlessness was common. He distinguished two types of crimes: unprofessional (dueling, crimes of drunkenness, selling whiskey to the Natives, cutting trees on federal land) and professional (rustling, highway robbery, counterfeiting). Criminals found many opportunities to rob pioneer families of their possessions, while the few underfunded lawmen had great difficulty detecting, arresting, holding, and convicting wrongdoers. Bandits, typically in groups of two or three, rarely attacked stagecoaches with a guard carrying a sawed-off, double-barreled shotgun; it proved less risky to rob teamsters, people on foot, and solitary horsemen, while bank robberies themselves were harder to pull off due to the security of the establishment. According to historian Brian Robb, the earliest form of organized crime in America was born from the gangs of the Old West. When criminals were convicted, the punishment was severe. Aside from the occasional Western sheriff and marshal, there were various other law enforcement agencies throughout the American frontier, such as the Texas Rangers. These lawmen were instrumental not just in keeping the peace but also in protecting the locals from Native and Mexican threats at the border. Law enforcement tended to be more stringent in towns than in rural areas. Law enforcement emphasized maintaining stability more than armed combat, focusing on drunkenness, disarming cowboys who violated gun-control edicts, and dealing with flagrant breaches of gambling and prostitution ordinances. Dykstra argues that the violent image of the cattle towns in film and fiction is largely a myth. The real Dodge City, he says, was the headquarters for the buffalo-hide trade of the Southern Plains and one of the West's principal cattle towns, a sale and shipping point for cattle arriving from Texas.
He states there is a "second Dodge City" that belongs to the popular imagination and thrives as a cultural metaphor for violence, chaos, and depravity. For the cowboy arriving with money in hand after two months on the trail, the town was exciting. A contemporary eyewitness of Hays City, Kansas, paints a vivid image of this cattle town: Hays City by lamplight was remarkably lively, but not very moral. The streets blazed with a reflection from saloons, and a glance within showed floors crowded with dancers, the gaily dressed women striving to hide with ribbons and paint the terrible lines which that grim artist, Dissipation, loves to draw upon such faces... To the music of violins and the stamping of feet the dance went on, and we saw in the giddy maze old men who must have been pirouetting on the very edge of their graves. It has been acknowledged that the popular portrayal of Dodge City in film and fiction carries a note of truth, however, as gun crime was rampant in the city before the establishment of a local government. Soon after the city's residents officially established their first municipal government, a law banning concealed firearms was enacted and crime declined. Similar laws were passed in other frontier towns to reduce the rate of gun crime as well. As UCLA law professor Adam Winkler noted: Carrying of guns within the city limits of a frontier town was generally prohibited. Laws barring people from carrying weapons were commonplace, from Dodge City to Tombstone. When Dodge City residents first formed their municipal government, one of the very first laws enacted was a ban on concealed carry. The ban was soon after expanded to open carry, too. The Hollywood image of the gunslinger marching through town with two Colts on his hips is just that—a Hollywood image, created for its dramatic effect. Tombstone, Arizona, was a turbulent mining town that flourished longer than most, from 1877 to 1929. Silver was discovered in 1877, and by 1881 the town had a population of over 10,000. In 1879 the newly arrived Earp brothers bought shares in the Vizina mine, water rights, and gambling concessions, but Virgil, Wyatt, and Morgan Earp obtained positions at different times as federal and local lawmen. After more than a year of threats and feuding, they, along with Doc Holliday, killed three outlaws in the Gunfight at the O.K. Corral, the most famous gunfight of the Old West. In the aftermath, Virgil Earp was maimed in an ambush, and Morgan Earp was assassinated while playing billiards. Wyatt and others, including his brothers James Earp and Warren Earp, pursued those they believed responsible in an extra-legal vendetta, and warrants were issued for their arrest in the murder of Frank Stilwell. The Cochise County Cowboys were one of the first organized crime syndicates in the United States, and their demise came at the hands of Wyatt Earp. Western storytellers and filmmakers featured the gunfight in many Western productions. Walter Noble Burns's novel Tombstone (1927) made Earp famous. Hollywood celebrated Earp's Tombstone days with John Ford's My Darling Clementine (1946), John Sturges's Gunfight at the O.K. Corral (1957) and Hour of the Gun (1967), Frank Perry's Doc (1971), George Cosmatos's Tombstone (1993), and Lawrence Kasdan's Wyatt Earp (1994). They solidified Earp's modern reputation as the Old West's deadliest gunman.
The most notorious banditry was the work of the infamous outlaws of the West, including the James–Younger Gang, Billy the Kid, the Dalton Gang, Black Bart, Sam Bass, Butch Cassidy's Wild Bunch, and hundreds of others who preyed on banks, trains, stagecoaches, and in some cases even armed government transports, as in the Wham Paymaster robbery and the Skeleton Canyon robbery. Some of the outlaws, such as Jesse James, were products of the violence of the Civil War (James had ridden with Quantrill's Raiders), and others became outlaws during hard times in the cattle industry. Many were misfits and drifters who roamed the West avoiding the law. In rural California, Joaquin Murieta, Jack Powers, Augustine Chacon and other bandits terrorized the state. When outlaw gangs were near, towns would occasionally raise a posse to drive them out or capture them. Seeing in the need to combat the bandits a growing business opportunity, Allan Pinkerton ordered his National Detective Agency, founded in 1850, to open branches in the West, and it got into the business of pursuing and capturing outlaws. To evade the law, outlaws used the open range, remote passes, and badlands as hiding places. Some settlements and towns on the frontier also housed outlaws and criminals and were known as "outlaw towns". Some lesser-known outlaws and bandits of the Wild West include certain Black women. Eliza Stewart, known as "Big Jack" to her friends, was arrested for the attempted murder of her paramour, and again later for assault. Stewart did time in a prison in Laramie, Wyoming. Caroline Hayes was another Black female outlaw who was notorious for theft and spent time in prisons in Laramie and Cheyenne, Wyoming. Banditry was a major issue in California after 1849, as thousands of young men detached from family or community moved into a land with few law enforcement mechanisms. To combat this, the San Francisco Committee of Vigilance was established to give drumhead trials and death sentences to well-known offenders. Other early settlements likewise created private agencies to protect their communities, given the lack of peace-keeping establishments. These vigilance committees reflected different occupations on the frontier, such as land clubs, cattlemen's associations and mining camps. Similar vigilance committees also existed in Texas, and their main objective was to stamp out lawlessness and rid communities of desperadoes and rustlers. These committees could descend into mob rule, but usually were made up of responsible citizens who wanted only to maintain order. Criminals caught by these vigilance committees were treated cruelly, often hanged or shot without any form of trial. Civilians also took up arms to defend themselves in the Old West, sometimes siding with lawmen (Coffeyville Bank Robbery), or siding with outlaws (Battle of Ingalls). Mary Fields, also known as "Stagecoach Mary" for her role as the first female postal worker in the West, was known to be a brawler and a shotgun rider. As a civilian and a Black woman she defended herself with her shotgun. On the post-Civil War frontier, over 523 whites, 34 blacks, and 75 others were victims of lynching. However, lynching in the Old West was not primarily caused by the absence of a legal system; social class also played a role. Historian Michael J.
Pfeifer writes, "Contrary to the popular understanding, early territorial lynching did not flow from an absence or distance of law enforcement but rather from the social instability of early communities and their contest for property, status, and the definition of social order." Range wars were infamous armed conflicts that took place in the "open range" of the American frontier. The subject of these conflicts was the control of lands freely used for farming and cattle grazing, which gave the conflicts their name. Range wars became more common by the end of the American Civil War, and numerous conflicts were fought such as the Pleasant Valley War, Johnson County War, Pecos War, Mason County War, Colorado Range War, Fence Cutting War, Colfax County War, Castaic Range War, Spring Creek raid, Porum Range War, Barber–Mizell feud, San Elizario Salt War and others. During a range war in Montana, a vigilante group called Stuart's Stranglers, which was made up of cattlemen and cowboys, killed up to 20 criminals and range squatters in 1884 alone. In Nebraska, stock grower Isom Olive led a range war in 1878 that killed a number of homesteaders in lynchings and shootouts before eventually leading to his own murder. Another infamous type of open range conflict was the Sheep Wars, which were fought between sheep ranchers and cattle ranchers over grazing rights and mainly occurred in Texas, Arizona and the border region of Wyoming and Colorado. In most cases, formal military involvement was used to quickly put an end to these conflicts. Other conflicts over land and territory were also fought such as the Regulator–Moderator War, Cortina Troubles, Las Cuevas War and the Bandit War. Feuds involving families and bloodlines also occurred frequently on the frontier. Since private agencies and vigilance committees were the substitute for proper courts, many families initially depended on themselves and their communities for their security and justice. These feuds and wars include the Lincoln County War, Tutt–Everett War, Flynn–Doran feud, Early–Hasley feud, Brooks-Baxter War, Sutton–Taylor feud, Horrell Brothers feud, Brooks–McFarland Feud, Reese–Townsend feud and the Earp Vendetta Ride. The end of the bison herds opened up millions of acres for cattle ranching. Spanish cattlemen had introduced cattle ranching and longhorn cattle to the Southwest in the 17th century, and the men who worked the ranches, called "vaqueros", were the first "cowboys" in the West. After the Civil War, Texas ranchers raised large herds of longhorn cattle. The nearest railheads were 800 or more miles (1300+ km) north in Kansas (Abilene, Kansas City, Dodge City, and Wichita). Once the cattle were fattened, the ranchers and their cowboys drove the herds north along the Western, Chisholm, and Shawnee trails. The cattle were shipped to Chicago, St. Louis, and points east for slaughter and consumption in the fast-growing cities. The Chisholm Trail, laid out by cattleman Joseph McCoy along an old trail marked by Jesse Chisholm, was the major artery of cattle commerce, carrying over 1.5 million head of cattle between 1867 and 1871 over the 800 miles (1,300 km) from south Texas to Abilene, Kansas. The long drives were treacherous, especially crossing water such as the Brazos and the Red River and when they had to fend off Natives and rustlers looking to make off with their cattle. A typical drive would take three to four months, with a herd stretching two miles (3 km), six cattle abreast.
Despite the risks, a successful drive proved very profitable to everyone involved, as the price of one steer was $4 in Texas and $40 in the East. By the 1870s and 1880s, cattle ranches expanded further north into new grazing grounds and replaced the bison herds in Wyoming, Montana, Colorado, Nebraska, and the Dakota territory, using the rails to ship to both coasts. Many of the largest ranches were owned by Scottish and English financiers. The single largest cattle ranch in the entire West was owned by American John W. Iliff, "cattle king of the Plains", operating in Colorado and Wyoming. Gradually, longhorns were replaced by the British breeds of Hereford and Angus, introduced by settlers from the Northwest. Though less hardy and more disease-prone, these breeds produced better-tasting beef and matured faster. The funding for the cattle industry came largely from British sources, as the European investors engaged in a speculative extravaganza—a "bubble". Graham concludes the mania was founded on genuine opportunity, as well as "exaggeration, gullibility, inadequate communications, dishonesty, and incompetence". A severe winter engulfed the plains toward the end of 1886 and well into 1887, locking the prairie grass under ice and crusted snow which starving herds could not penetrate. The British lost most of their money, as did eastern investors like Theodore Roosevelt, but their investments did create a large industry that continues to cycle through boom and bust periods. On a much smaller scale, sheep grazing was locally popular; sheep were easier to feed and needed less water. However, Americans ate little mutton. As farmers moved in, open-range cattle ranching came to an end and was replaced by barbed-wire spreads where water, breeding, feeding, and grazing could be controlled. This led to "fence wars" which erupted over disputes about water rights. Anchoring the booming cattle industry of the 1860s and 1870s were the cattle towns in Kansas and Missouri. Like the mining towns in California and Nevada, cattle towns such as Abilene, Dodge City, and Ellsworth experienced a short period of boom and bust lasting about five years. The cattle towns would spring up as land speculators rushed in ahead of a proposed rail line and built a town and the supporting services attractive to the cattlemen and the cowboys. If the railroads complied, the new grazing ground and supporting town would secure the cattle trade. However, unlike the mining towns which in many cases became ghost towns and ceased to exist after the ore played out, cattle towns often evolved from cattle to farming and continued after the grazing lands were exhausted. Concern for the protection of the environment became a new issue in the late 19th century, pitting different interests against one another. On one side were the lumber and coal companies who called for maximum exploitation of natural resources to maximize jobs, economic growth, and their own profit. In the center were the conservationists, led by Theodore Roosevelt and his coalition of outdoorsmen, sportsmen, bird watchers, and scientists. They wanted to reduce waste; emphasized the value of natural beauty for tourism and ample wildlife for hunters; and argued that careful management would not only enhance these goals but also increase the long-term economic benefits to society by planned harvesting and environmental protections. Roosevelt worked his entire career to put the issue high on the national agenda. He was deeply committed to conserving natural resources.
He worked closely with Gifford Pinchot and used the Newlands Reclamation Act of 1902 to promote federal construction of dams to irrigate small farms and placed 230 million acres (360,000 mi2 or 930,000 km2) under federal protection. Roosevelt set aside more Federal land, national parks, and nature preserves than all of his predecessors combined. Roosevelt explained his position in 1910: Conservation means development as much as it does protection. I recognize the right and duty of this generation to develop and use the natural resources of our land but I do not recognize the right to waste them, or to rob, by wasteful use, the generations that come after us. The third element, smallest at first but growing rapidly after 1870, was the environmentalists, who honored nature for its own sake and rejected the goal of maximizing human benefits. Their leader was John Muir (1838–1914), a widely read author and naturalist and pioneer advocate of preservation of wilderness for its own sake, and founder of the Sierra Club. Muir, a Scottish-American based in California, in 1889 started organizing support to preserve the sequoias in the Yosemite Valley; Congress did pass the Yosemite National Park bill (1890). In 1897 President Grover Cleveland created thirteen protected forests, but lumber interests had Congress cancel the move. Muir, taking the persona of an Old Testament prophet, crusaded against the lumbermen, portraying the fight as a contest "between landscape righteousness and the devil". A master publicist, Muir turned the tide of public sentiment with magazine articles in Harper's Weekly (June 5, 1897) and the Atlantic Monthly. He mobilized public opinion to support Roosevelt's program of setting aside national monuments, national forest reserves, and national parks. However, Muir broke with Roosevelt and especially President William Howard Taft on the Hetch Hetchy dam, which was built in Yosemite National Park to supply water to San Francisco. Biographer Donald Worster says, "Saving the American soul from a total surrender to materialism was the cause for which he fought." The rise of the cattle industry and the cowboy is directly tied to the demise of the huge herds of bison—usually called the "buffalo". Once numbering over 25 million on the Great Plains, the grass-eating herds were a vital resource for the Plains Indians, providing food, hides for clothing and shelter, and bones for implements. Loss of habitat, disease, and over-hunting steadily reduced the herds through the 19th century to the point of near extinction. The last 10–15 million died out in the decade 1872–1883; only 100 survived. The tribes that depended on the buffalo had little choice but to accept the government offer of reservations, where the government would feed and supply them. Conservationists founded the American Bison Society in 1905; it lobbied Congress to establish public bison herds. Several national parks in the U.S. and Canada were created, in part to provide a sanctuary for bison and other large wildlife. The bison population reached 500,000 by 2003. Territorial governments west of the Mississippi River were created by the United States government through Organic Acts, and their structures were very similar, with little variation. Each territory had a governor, a secretary for the territory, an "Indian agent", and three justices for a court, all appointed by the US president and confirmed with the "advice and consent" of the US Senate. Each territory had a bicameral legislature.
Judges, justices of the peace, the territorial attorney, the marshal, and county officers were either popularly elected or selected by the territorial governor and confirmed by the upper house of the territorial legislature. Any law passed by a territory required congressional approval; without it, the law was invalid. Following the 1890 U.S. census, the superintendent announced that there was no longer a clear line of advancing settlement, and hence no longer a contiguous frontier in the continental United States. In the population distribution of the 1900 U.S. census, however, a contiguous frontier line still appears. By the 1910 U.S. census, only pockets of frontier remained, with no clear westward line, allowing travel across the continent without ever crossing a frontier line. Virgin farmland was increasingly hard to find after 1890—although the railroads advertised some in eastern Montana. Bicha shows that nearly 600,000 American farmers sought cheap land by moving to the Prairie frontier of the Canadian West from 1897 to 1914. However, about two-thirds of them grew disillusioned and returned to the U.S. Despite this, homesteaders claimed more land in the first two decades of the 20th century than in the 19th century. The Homestead Acts and the proliferation of railroads are often credited as important factors in shrinking the frontier, by efficiently bringing in settlers and the required infrastructure. The increase in the size of land grants from 160 to 320 acres in 1909, and then to 640 acres of rangeland in 1916, accelerated this process. Barbed wire is also credited with closing off the traditional open range. In addition, the growing adoption of automobiles and their required network of adequate roads, first federally subsidized by the Federal Aid Highway Act of 1916, solidified the frontier's end. The admission of Oklahoma as a state in 1907, upon the combination of the Oklahoma Territory and the last remaining Indian Territory, and of the Arizona and New Mexico territories as states in 1912, marks the end of the frontier story for many scholars. Because of their low and uneven populations during this period, however, some frontier territory remained for the time being. A few typical frontier episodes still occurred; the last stagecoach robbery, for example, took place in Nevada's remaining frontier territory in December 1916. A period known as the "Western Civil War of Incorporation", which was often violent, lasted from the 1850s to 1919. The Mexican Revolution also led to significant conflict reaching across the US-Mexico border, which was still mostly frontier territory; this conflict is known as the Mexican Border War (1910–1919). Flashpoints included the Battle of Columbus (1916) and the Punitive Expedition (1916–1917). The Bandit War (1915–1919) involved attacks targeted against Texan settlers. Skirmishes involving Natives also happened as late as the Bluff War (1914–1915) and the Posey War (1923). The westward expansion of American influence and jurisdiction across the Pacific in the late 19th century was in some sense a new "Asia–Pacific frontier", with Frederick Jackson Turner arguing this to be a necessary element of the U.S.'s growth, as its identity as a civilized and ideals-based nation depended on constantly overcoming a savage 'other'. Alaska was not admitted as a state until 1959. With that, the driving ethos and storyline of the "American frontier" had passed.
American frontier in popular culture The exploration, settlement, exploitation, and conflicts of the "American Old West" created a cultural milieu that has been celebrated by Americans and foreigners alike—in art, music, dance, novels, magazines, short stories, poetry, theater, video games, movies, radio, television, song, and oral tradition. Beth E. Levy argues that the actual and mythological west inspired composers Aaron Copland, Roy Harris, Virgil Thomson, Charles Wakefield Cadman, and Arthur Farwell. Religious themes have inspired many environmentalists as they contemplate the pristine West before the frontiersmen violated its spirituality. Actually, as the historian William Cronon has demonstrated, the concept of "wilderness" was highly negative and the antithesis of religiosity before the romantic movement of the 19th century. The Frontier Thesis of historian Frederick Jackson Turner, proclaimed in 1893, established the main lines of historiography which fashioned scholarship for three or four generations and appeared in the textbooks used by practically all American students. The mythologizing of the West began with minstrel shows and popular music in the 1840s. During the same period, P. T. Barnum presented Native chiefs, dances, and other Wild West exhibits in his museums. However, large-scale awareness took off when the dime novel appeared in 1859, the first being Malaeska, the Indian Wife of the White Hunter. By simplifying reality and grossly exaggerating the truth, the novels captured the public's attention with sensational tales of violence and heroism and fixed in the public's mind stereotypical images of heroes and villains—courageous cowboys and savage Natives, virtuous lawmen and ruthless outlaws, brave settlers and predatory cattlemen. Millions of copies and thousands of titles were sold. The novels relied on a series of predictable literary formulas appealing to mass tastes and were often written in as little as a few days. The most successful of all dime novels was Edward S. Ellis' Seth Jones (1860). Ned Buntline's stories glamorized Buffalo Bill Cody, and Edward L. Wheeler created "Deadwood Dick" and "Hurricane Nell" while featuring Calamity Jane. Buffalo Bill Cody was the most effective popularizer of the Old West in the U.S. and Europe. He presented the first "Wild West" show in 1883, featuring a recreation of famous battles (especially Custer's Last Stand), expert marksmanship, and dramatic demonstrations of horsemanship by cowboys and natives, as well as sure-shooting Annie Oakley. Elite Eastern writers and artists of the late 19th century promoted and celebrated western lore. Theodore Roosevelt, wearing his hats as a historian, explorer, hunter, rancher, and naturalist, was especially productive. Their work appeared in upscale national magazines such as Harper's Weekly, which featured illustrations by artists Frederic Remington, Charles M. Russell, and others. Readers bought action-filled stories by writers like Owen Wister, conveying vivid images of the Old West. Remington lamented the passing of an era he helped to chronicle when he wrote: I knew the wild riders and the vacant land were about to vanish forever...I saw the living, breathing end of three American centuries of smoke and dust and sweat. Theodore Roosevelt wrote many books on the west and the frontier, and made frequent reference to it as president.
From the late 19th century, the railroads promoted tourism in the west, with guided tours of western sites, especially national parks like Yellowstone National Park. Both tourists to the West and avid fiction readers enjoyed the visual imagery of the frontier. After 1900, Western movies provided the most famous examples, as in the numerous films of John Ford. He was especially enamored of Monument Valley. Critic Keith Phipps says, "its five square miles [13 square kilometers] have defined what decades of moviegoers think of when they imagine the American West." The heroic stories coming out of the building of the transcontinental railroad in the mid-1860s enlivened many dime novels and illustrated many newspapers and magazines with the juxtaposition of the traditional environment with the iron horse of modernity. The cowboy has for over a century been an iconic American image both in the country and abroad. Heather Cox Richardson argues for a political dimension to the cowboy image: The timing of the cattle industry’s growth meant that cowboy imagery grew to have extraordinary power. Entangled in the vicious politics of the postwar years, Democrats, especially those in the old Confederacy, imagined the West as a land untouched by Republican politicians they hated. They developed an image of the cowboys as men who worked hard, played hard, lived by a code of honor, protected themselves, and asked nothing of the government. In the hands of Democratic newspaper editors, the realities of cowboy life—the poverty, the danger, the debilitating hours—became romantic. Cowboys embodied virtues Democrats believed Republicans were destroying by creating a behemoth government catering to lazy ex-slaves. By the 1860s, cattle drives were a feature of the plains landscape, and Democrats had made cowboys a symbol of rugged individual independence, something they insisted Republicans were destroying. The most famous popularizers of the image included part-time cowboy and "Rough Rider" President Theodore Roosevelt (1858–1919), a Republican who made "cowboy" internationally synonymous with the brash, aggressive American. He was followed by trick roper Will Rogers (1879–1935), the leading humorist of the 1920s. Roosevelt had conceptualized the herder (cowboy) as a stage of civilization distinct from the sedentary farmer—a theme well expressed in the 1943 Broadway hit musical Oklahoma! that highlights the enduring conflict between cowboys and farmers. Roosevelt argued that the manhood typified by the cowboy—and outdoor activity and sports generally—was essential if American men were to avoid the softness and rot produced by an easy life in the city. Will Rogers, the son of a Cherokee judge in Oklahoma, started with rope tricks and fancy riding, but by 1919 discovered his audiences were even more enchanted with his wit in his representation of the wisdom of the common man. Others who contributed to enhancing the romantic image of the American cowboy include Charles Siringo (1855–1928) and Andy Adams (1859–1935). Cowboy, Pinkerton detective, and western author, Siringo was the first authentic cowboy autobiographer. Adams spent the 1880s in the cattle industry in Texas and the 1890s mining in the Rockies. When an 1898 play's portrayal of Texans outraged Adams, he started writing plays, short stories, and novels drawn from his own experiences. His The Log of a Cowboy (1903) became a classic novel about the cattle business, especially the cattle drive.
It described a fictional drive of the Circle Dot herd from Texas to Montana in 1882 and became a leading source on cowboy life; historians retraced its path in the 1960s, confirming its basic accuracy. His writings are acclaimed and criticized for realistic fidelity to detail on the one hand and thin literary qualities on the other. Many regard Red River (1948), directed by Howard Hawks, and starring John Wayne and Montgomery Clift, as an authentic cattle drive depiction. The unique skills of the cowboys are highlighted in the rodeo. It began in an organized fashion in the West in the 1880s, when several Western cities followed up on touring Wild West shows and organized celebrations that included rodeo activities. The establishment of major cowboy competitions in the East in the 1920s led to the growth of rodeo sports. Trail cowboys who were also known as gunfighters like John Wesley Hardin, Luke Short and others, were known for their prowess, speed, and skill with their pistols and other firearms. Their violent escapades and reputations morphed over time into the stereotypical image of violence endured by the "cowboy hero". Historians of the American West have written about the mythic West; the west of western literature, art, and of people's shared memories. The phenomenon is "the Imagined West". The "Code of the West" was an unwritten, socially agreed upon set of informal laws shaping the cowboy culture of the Old West. Over time, the cowboys developed a personal culture of their own, a blend of values that even retained vestiges of chivalry. Such hazardous work in isolated conditions also bred a tradition of self-dependence and individualism, with great value put on personal honesty, exemplified in songs and cowboy poetry. The code also included the gunfighter, who sometimes followed a form of code duello adopted from the Old South, in order to solve disputes and duels. Extrajudicial justice seen during the frontier days such as lynching, vigilantism and gunfighting, in turn popularized by the Western genre, would later be known in modern times as examples of frontier justice. Historiography Scores of Frederick Jackson Turner's students became professors in history departments in the western states and taught courses on the frontier influenced by his ideas. Scholars have debunked many of the myths of the frontier, but they nevertheless live on in community traditions, folklore, and fiction. In the 1970s a historiographical range war broke out between the traditional frontier studies, which stress the influence of the frontier on all of American history and culture, and the "New Western History" which narrows the geographical and temporal framework to concentrate on the trans-Mississippi West after 1850. It avoids the word "frontier" and stresses cultural interaction between white culture and groups such as Natives and Hispanics. History professor William Weeks of the University of San Diego argues that in this "New Western History" approach: It is easy to tell who the bad guys are—they are almost invariably white, male, and middle-class or better, while the good guys are almost invariably non-white, non-male, or non-middle class.... Anglo-American civilization....is represented as patriarchal, racist, genocidal, and destructive of the environment, in addition to hypocritically betraying the ideals on which it supposedly is built. By 2005, Steven Aron argued that the two sides had "reached an equilibrium in their rhetorical arguments and critiques". 
Since then, however, the field of American frontier and western regional history has become increasingly inclusive. The field's more recent focus was captured in the language of the 2024 Call for Papers of the Western History Association: The Western History Association was once an organization dominated by white male scholars who typically wrote triumphalist narratives. We are no longer that organization. We now produce pathbreaking scholarship by and about the members of the many communities previously excluded from traditional tales of expansion. This new work and the people writing it have transformed the WHA, the history of the U.S. West, and the profession more broadly. Meanwhile, environmental history has emerged, in large part from the frontier historiography, hence its emphasis on wilderness. It plays an increasingly large role in frontier studies. Historians have approached the environment through the lens of either the frontier or regionalism. The first group emphasizes human agency on the environment; the second looks at the influence of the environment. William Cronon has argued that Turner's famous 1893 essay was environmental history in an embryonic form. It emphasized the vast power of free land to attract and reshape settlers, making a transition from wilderness to civilization. Journalist Samuel Lubell saw similarities between the frontier's Americanization of immigrants that Turner described and the social climbing by later immigrants in large cities as they moved to wealthier neighborhoods. He compared the effects of the railroad opening up Western lands to urban transportation systems and the automobile, and Western settlers' "land hunger" to poor city residents seeking social status. Just as the Republican party benefited from support from "old" immigrant groups that settled on frontier farms, "new" urban immigrants formed an important part of the Democratic New Deal coalition that began with Franklin Delano Roosevelt's victory in the 1932 presidential election. Since the 1960s, the history department at the University of New Mexico has been an active center for these studies, along with the University of New Mexico Press. Leading historians there include Gerald D. Nash, Donald C. Cutter, Richard N. Ellis, Richard Etulain, Ferenc Szasz, Margaret Connell-Szasz, Paul Hutton, Virginia Scharff, and Samuel Truett. The department has collaborated with other departments and emphasizes Southwestern regionalism, minorities in the Southwest, and historiography. See also Explanatory notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Leanan_s%C3%ADdhe] | [TOKENS: 488] |
Contents Leanan sídhe The leannán sídhe (Irish: [ˈl̠ʲan̪ˠaːnˠ ˈʃiː]; lit. 'fairy lover'; Scottish Gaelic: leannan sìth, Manx: lhiannan shee) is a figure from Irish folklore. She is depicted as a beautiful woman of the Aos Sí ("people of the fairy mounds") who takes a human lover. Lovers of the leannán sídhe are said to live brief, though highly inspired, lives. The name comes from the Gaelic words for a sweetheart or lover and the term for inhabitants of fairy mounds (fairy). While the leannán sídhe is most often depicted as a female fairy, there is at least one reference to a male leannán sídhe troubling a mortal woman. A version of the myth was popularized during the Celtic Revival in the late 19th-century. The leannán sídhe is mentioned by Jane Wilde, writing as "Speranza", in her 1887 Ancient Legends, Mystic Charms and Superstitions of Ireland. W. B. Yeats popularized his own 'newly-ancient' version of the leannán sídhe, emphasizing the spirit's almost vampiric tendencies. As he imagined it, the leannán sídhe is depicted as a beautiful muse who offers inspiration to an artist in exchange for their love and devotion; although the supernatural affair leads to madness and eventual death for the artist: The Leanhaun Shee (fairy mistress) seeks the love of mortals. If they refuse, she must be their slave; if they consent, they are hers, and can only escape by finding another to take their place. The fairy lives on their life, and they waste away. Death is no escape from her. She is the Gaelic muse, for she gives inspiration to those she persecutes. The Gaelic poets die young, for she is restless, and will not let them remain long on earth—this malignant phantom. In literature and pop culture The 2017 horror movie MUSE, written and directed by John Burr, features her as the mythical and deadly spirit who becomes the muse and lover of a painter. See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/SISAL] | [TOKENS: 454] |
Contents SISAL SISAL (Streams and Iteration in a Single Assignment Language) is a general-purpose single assignment functional programming language with strict semantics, implicit parallelism, and efficient array handling. SISAL outputs a dataflow graph in Intermediary Form 1 (IF1). It was derived from the Value-oriented Algorithmic Language (VAL), designed by Jack Dennis, and adds recursion and finite streams. It has a Pascal-like syntax and was designed to be a common high-level programming language for numerical programs on a variety of multiprocessors. History SISAL was defined in 1983 by James McGraw et al., at the University of Manchester, Lawrence Livermore National Laboratory (LLNL), Colorado State University and Digital Equipment Corporation (DEC). It was revised in 1985, and the first compiled implementation was made in 1986. According to some sources, its performance is superior to C and rivals Fortran, combined with efficient and automatic parallelization. SISAL's name came from grepping the Unix dictionary /usr/dict/words for "sal", the initials of "Single Assignment Language". Versions exist for the Cray X-MP, Y-MP, 2; Sequent, Encore, Alliant, DEC VAX-11/784, dataflow architectures, KSR1, Inmos Transputers, and systolic arrays. Architecture The requirements for a fine-grain parallelism language are better met with a dataflow programming language than a system programming language. SISAL is more than just a dataflow and fine-grain language. It is a set of tools that convert a textual, human-readable dataflow language into a graph format (named IF1, Intermediary Form 1). Part of the SISAL project also involved converting this graph format into runnable C code. SISAL Renaissance Era In 2010 SISAL saw a brief resurgence when a group of undergraduates at Worcester Polytechnic Institute investigated implementing a fine-grain parallelism backend for the SISAL language. In 2018 SISAL was modernized with indent-based syntax, first-class functions, lambdas, closures and lazy semantics within the SISAL-IS project. References External links |
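No SISAL toolchain is assumed to be available here, so the following is only a rough illustrative sketch in Python, not SISAL itself and not the IF1 tooling described above. It mimics the shape of a single-assignment, dataflow-style loop in which each iteration is a pure function of its index, so iterations share no mutable state and can be evaluated in parallel before being reduced to one value.

```python
from multiprocessing import Pool

def body(i: int) -> int:
    # Pure function of the loop index: no side effects and no shared
    # mutable state, which is the independence property a
    # single-assignment language guarantees by construction.
    return i * i

def parallel_sum_of_squares(n: int) -> int:
    # Analogue of a dataflow-style loop that "returns the sum of body(i)":
    # map the pure body over the index range in parallel, then reduce
    # the partial results with addition.
    with Pool() as pool:
        return sum(pool.map(body, range(1, n + 1)))

if __name__ == "__main__":
    print(parallel_sum_of_squares(10))  # prints 385
```

In SISAL the compiler extracts this independence automatically from the single-assignment form; in the sketch it has to be supplied by hand through the choice of a pure body function.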
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-133] | [TOKENS: 11899] |
Contents Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, being picked up and spread at the low Martian gravity even by the weak wind of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active with marsquakes trembling underneath the ground, but also hosts many enormous volcanoes that are extinct (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, is the currently dominating and remaining influence on geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth. 
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of run-away accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the three primary periods are the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity (a rough check of this figure follows below). Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
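As a quick consistency check on the surface-gravity figure quoted above (this is only arithmetic on the stated size and mass ratios; the diameter ratio of roughly 0.53 is taken from the 6,779 km figure given earlier and is not restated in this passage), surface gravity scales as mass divided by radius squared:

$$\frac{g_\text{Mars}}{g_\text{Earth}} = \frac{M_\text{Mars}/M_\text{Earth}}{\left(R_\text{Mars}/R_\text{Earth}\right)^{2}} \approx \frac{0.11}{(0.53)^{2}} \approx 0.39,$$

in line with the quoted value of about 38% of Earth's surface gravity.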
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than surrounding depth intervals. The mantle appears to be rigid down to the depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogenous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 kilometres (381 mi) ± 67 kilometres (42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, concentrations that are toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path. 
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface are on average 0.64 millisieverts of radiation per day, and significantly less than the radiation of 1.84 millisieverts per day or 22 millirads per day during the flight to and from Mars. For comparison the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts of radiation per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with potentially levels as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830. 
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with a possible two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle from the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface. 
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by non-biological processes such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Mars's higher concentration of atmospheric CO2 and lower surface pressure, compared to Earth, may be why sound is attenuated more there; natural sources of sound are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars had temporarily doubled, associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to those on Earth. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity and approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also produce a covering of dry ice on the polar caps. Hydrology While Mars contains large amounts of water, most of it is dust-covered water ice at the Martian polar ice caps. 
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet with a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and different cases of snow and frost, often mixed with snow of carbon dioxide dry ice. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars. 
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4-kilometre (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (about 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system. 
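The "five to seven times" enrichment quoted above follows directly from the two D/H values; a minimal Python sketch of the arithmetic (illustrative only, using the figures from the text):

d_h_mars, uncertainty = 9.3e-4, 1.7e-4   # Martian atmospheric D/H and its quoted uncertainty
d_h_earth = 1.56e-4                      # terrestrial reference value
low = (d_h_mars - uncertainty) / d_h_earth     # about 4.9
central = d_h_mars / d_h_earth                 # about 6.0
high = (d_h_mars + uncertainty) / d_h_earth    # about 7.1
print(f"Deuterium enrichment relative to Earth: {low:.1f} to {high:.1f} (central {central:.1f})")
# Consistent with the "five to seven times" enrichment cited in the text.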
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference and thus the delta-v needed to transfer between Mars and Earth is the second lowest for Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars has its closest approach to Earth (opposition) in a synodic period of 779.94 days. It should not be confused with Mars conjunction, where the Earth and Mars are at opposite sides of the Solar System and form a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet. 
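The orbital radii just quoted for Phobos and Deimos are enough to reproduce the orbital behaviour described in the next paragraph, given two standard constants that are not in the text and are assumed here: Mars's gravitational parameter (about 4.2828 × 10¹³ m³/s²) and its sidereal rotation period (about 24.62 hours). A rough Python sketch using Kepler's third law:

import math

GM_MARS = 4.2828e13          # m^3/s^2, standard gravitational parameter of Mars (assumed, not from the text)
T_ROT = 24.6229 * 3600.0     # s, sidereal rotation period of Mars (assumed, not from the text)

def circular_period(a_metres):
    # Kepler's third law for a circular orbit of radius a_metres
    return 2.0 * math.pi * math.sqrt(a_metres**3 / GM_MARS)

t_phobos = circular_period(9.376e6)    # orbital radius 9,376 km (from the text)
t_deimos = circular_period(23.460e6)   # orbital radius 23,460 km (from the text)
print(f"Phobos orbital period: {t_phobos / 3600:.2f} h")   # roughly 7.7 h
print(f"Deimos orbital period: {t_deimos / 3600:.1f} h")   # roughly 30 h

# Radius at which the orbital period equals Mars's rotation period (synchronous orbit):
a_sync = (GM_MARS * T_ROT**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"Synchronous orbit radius: {a_sync / 1e3:.0f} km")  # roughly 20,400 km

# Interval between successive Phobos risings for an observer on the rotating surface:
rise_interval = 1.0 / (1.0 / t_phobos - 1.0 / T_ROT)
print(f"Phobos rises again after about {rise_interval / 3600:.1f} h")  # roughly 11 h

These rough values agree with the statements that follow: Phobos orbits well below the synchronous altitude and reappears over the horizon after about 11 hours, while Deimos lies just outside synchronous orbit.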
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. By analyzing rocks which point to tidal processes on the planet, it is possible that these tides may have been regulated by a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis). More commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a 10-year study of the Martian orbit, using the diurnal parallax of Mars measured by Tycho Brahe to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the first telescopic astronomical observations, including of Mars, were made by the Italian astronomer Galileo Galilei. With the telescope, the diurnal parallax of Mars was again measured in an effort to determine the Sun-Earth distance. This was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave names of famous rivers on Earth. 
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous conceptions of Mars were overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: two flying past without contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) malfunctioning in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), started an uninterrupted active robotic presence at Mars that has lasted until today. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. 
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. A number of further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894, W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, it has poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and has insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by the impact of meteors, which on Earth can preserve signs of life, has also been found on the surface of the impact craters on Mars. Likewise, the glass in impact craters on Mars could have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although highly intriguing, no definitive final determination on a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021, China was planning to send a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal to settle on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared with the company in April 2024, Elon Musk envisions the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth, and in situ resource utilization on Mars, until the Mars colony reaches full self sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars still continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's series Barsoom, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-232] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated in 1989 to Canada; he has Canadian citizenship since his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and their leadership in the AI boom in the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes, and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025–26 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth. 
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune, (equivalent to $180,000,000 in 2025) Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, it was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement. 
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm. Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several well-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials. 
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021. Local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and had questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder. Musk later agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter. By the end of April, Musk had successfully concluded his bid for approximately $44 billion. This included approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase of the company on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives including CEO Parag Agrawal; Musk became the CEO instead. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk reduced content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of Hunter Biden's laptop controversy in the lead-up to the 2020 presidential election. Musk also promised to step down as CEO after a Twitter poll; five months later, he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X continues to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification. 
Other activities In August 2013, Musk announced plans for a version of a vactrain, and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence, intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content but framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement on a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021, which featured executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that this was a nod to United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and guessed that the non-invitation impacted Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized the Biden White House as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring. The organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE. A federal judge has ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his leadership of DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He has prioritized secrecy within the organization and has accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them of children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the Trump administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk alleging on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein, posting that "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, arguing that becoming an interplanetary species would lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and was described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently attacked as being responsible for spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down for three years as Tesla chairman but was able to remain as CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome in a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ... 
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs, and that if drugs somehow improved his productivity, "I would definitely take them!". Investigations by The New York Times revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates who have become troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has said have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. Musk has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest that the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Elon Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. Also in July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year. Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St Clair had filed for sole custody of her five-month-old son and for Musk to be recognised as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from the Wall Street Journal indicated that $1 million of these payments to St. Clair were structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas day in 2012, Musk emailed Epstein asking "Do you have any parties planned? 
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans for coming to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth was derived from Tesla stock, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions and often makes controversial statements, unlike other billionaires who prefer reclusiveness to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that, "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." Notes References Works cited Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Z.ai] | [TOKENS: 665] |
Contents Z.ai Knowledge Atlas Technology Joint Stock Co., Ltd.,[a] branded internationally as Z.ai, is a Chinese technology company specializing in artificial intelligence (AI). The company was formerly known as Zhipu AI outside China until its rebranding in 2025. As of 2024, it is regarded by investors as one of China's "AI Tiger" companies and, according to the International Data Corporation, is the third-largest LLM market player in China's AI industry. In January 2025, the United States Commerce Department blacklisted the company in its Entity List due to national security concerns. History Founded in 2019, the startup began at Tsinghua University and was later spun out as an independent company. In 2023, it raised 2.5 billion yuan (approximately US$350 million) from Alibaba Group and Tencent, along with Meituan, Ant Group, Xiaomi, and HongShan. In March 2024, Zhipu AI announced it was developing a Sora-like technology to achieve artificial general intelligence (AGI). In May 2024, the Saudi Arabian finance firm Prosperity7 Ventures, LLC participated in a US$400 million financing round for Zhipu AI at a valuation of approximately US$3 billion. In July 2024, the company debuted the Ying text-to-video model. In October 2024, Zhipu AI released GLM-4.0, an open-source end-to-end speech large language model. The model can replicate human-like interactions and can adjust its tone, emotion, or dialect based on the user's preference. In May 2025, the company sealed a 61.28 million yuan deal with the Chinese government for city projects in Hangzhou. In July 2025, Zhipu AI released GLM-4.5 and GLM-4.5 Air, its next-generation language models, and the company rebranded itself as Z.ai internationally. In August 2025, Z.ai announced that its GLM models are compatible with Huawei's Ascend processors. On August 11, 2025, Z.ai released GLM-4.5V, a new vision-language model (VLM) with a total of 106B parameters. In late September 2025, the company released GLM-4.6 using China's domestic chips, such as those from Cambricon Technologies. That same year, the company changed its official name to Knowledge Atlas Technology JSC Ltd. On 8 January 2026, Z.ai held its initial public offering on the Hong Kong Stock Exchange to become a listed company. It is considered to be China's first major LLM company to go public through an IPO. In February 2026, JPMorgan Chase recommended that investors purchase stock in the company alongside MiniMax. On February 11, 2026, Z.ai released GLM-5. Description Z.ai provides a range of AI products and services, and has offices in the Middle East, United Kingdom, Singapore, and Malaysia, along with innovation center projects across Southeast Asia as of 2025. In January 2025, the United States Commerce Department added the company to its Entity List, citing national security concerns. See also References Notes External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Collective_action] | [TOKENS: 3152] |
Contents Collective action Collective action refers to action taken together by a group of people whose goal is to enhance their condition and achieve a common objective. It is a term that has formulations and theories in many areas of the social sciences including psychology, sociology, anthropology, political science and economics. The social identity model Researchers Martijn van Zomeren, Tom Postmes, and Russell Spears conducted a meta-analysis of over 180 studies of collective action, in an attempt to integrate three dominant socio-psychological perspectives explaining antecedent conditions to this phenomenon – injustice, efficacy, and identity. Their resulting 2008 review article proposed an integrative Social Identity Model of Collective Action (SIMCA), which accounts for interrelationships among the three predictors as well as their predictive capacities for collective action. An important assumption of this approach is that people tend to respond to subjective states of disadvantage, which may or may not flow from objective physical and social reality. Examining collective action through perceived injustice was initially guided by relative deprivation theory (RDT). RDT focuses on a subjective state of unjust disadvantage, proposing that engaging in fraternal (group-based) social comparisons with others may result in feelings of relative deprivation that foster collective action. Group-based emotions resulting from perceived injustice, such as anger, are thought to motivate collective action in an attempt to rectify the state of unfair deprivation. The extent to which individuals respond to this deprivation involves several different factors and varies from extremely high to extremely low across different settings. Meta-analysis results confirm that effects of injustice causally predict collective action, highlighting the theoretical importance of this variable. Moving beyond RDT, scholars suggested that in addition to a sense of injustice, people must also have the objective, structural resources necessary to mobilize change through social protest. An important psychological development saw this research instead directed towards subjective expectations and beliefs that unified effort (collective action) is a viable option for achieving group-based goals – this is referred to as perceived collective efficacy. Empirically, collective efficacy is shown to causally affect collective action among a number of populations across varied contexts. Social identity theory (SIT) suggests that people strive to achieve and maintain positive social identities associated with their group memberships. Where a group membership is disadvantaged (for example, low status), SIT implicates three variables in the evocation of collective action to improve conditions for the group – permeability of group boundaries, legitimacy of the intergroup structures, and the stability of these relationships. For example, when disadvantaged groups perceive intergroup status relationships as illegitimate and unstable, collective action is predicted to occur, in an attempt to change status structures for the betterment of the disadvantaged group. Meta-analysis results also confirm that social identity causally predicts collective action across a number of diverse contexts.
Additionally, the integrated SIMCA affords another important role to social identity – that of a psychological bridge forming the collective base from which both collective efficacy and group injustice may be conceived. While there is sound empirical support for the causal importance of SIMCA's key theoretical variables on collective action, more recent literature has addressed the issue of reverse causation, finding support for a related, yet distinct, encapsulation model of social identity in collective action (EMSICA). This model suggests that perceived group efficacy and perceived injustice provide the basis from which social identity emerges, highlighting an alternative causal pathway to collective action. Recent research has sought to integrate SIMCA with intergroup contact theory (see Cakal, Hewstone, Schwär, & Heath) and others have extended SIMCA through bridging morality research with the collective action literature (see van Zomeren, Postmes, & Spears for a review). Also, utopian thinking has been proposed as an antecedent to collective action, alongside the routes of perceived injustice, efficacy, and social identity. Utopian thinking contributes to accessing cognitive alternatives, which are imagined models of societies that are different from the current society. Cognitive alternatives are proposed by many social identity theorists as an effective way to increase collective action. Moreover, utopian thinking has the potential to increase perceived injustice, perceived efficacy, or form new social identities and therefore affect collective action. Furthermore, recent longitudinal studies suggest that the outcomes of collective action can create feedback loops that affect future efficacy and trust. For example, a 2026 study of the 2024 UK General Election found that while collective action is often driven by a desire for change, it can backfire psychologically if the desired outcome is not achieved. Supporters of losing parties who had engaged in high levels of collective action experienced a "dampening" effect on their post-election political trust, hindering the restoration of their faith in the political system compared to those who were less active. Public good The economic theory of collective action is concerned with the provision of public goods (and other collective consumption) through the collaboration of two or more individuals, and the impact of externalities on group behavior. It is more commonly referred to as Public Choice. Mancur Olson's 1965 book The Logic of Collective Action: Public Goods and the Theory of Groups is an important early analysis of the problems of public good cost. Besides economics, the theory has found many applications in political science, sociology, communication, anthropology and environmentalism. The term collective action problem describes the situation in which multiple individuals would all benefit from a certain action, but the action has an associated cost that makes it implausible that any individual can or will undertake and solve it alone. The ideal solution is then to undertake this as a collective action, the cost of which is shared. Situations like this include the prisoner's dilemma, a collective action problem in which no communication is allowed, the free rider problem, and the tragedy of the commons, also known as the problem with open access. An allegorical metaphor often used to describe the problem is "belling the cat".
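To make the free-rider incentive concrete, the sketch below simulates a standard linear public goods game in Python; the endowment, multiplier, and group size are hypothetical values chosen for illustration and are not drawn from Olson's analysis or any study cited here. Each player either keeps an endowment or contributes it to a common pool, the pool is multiplied and shared equally, so the group does best when everyone contributes, yet each individual does better by free-riding.

```python
# Minimal sketch of a linear public goods game illustrating the
# collective action problem (hypothetical parameters for illustration).

ENDOWMENT = 10.0   # each player's private resources
MULTIPLIER = 3.0   # pooled contributions are multiplied by this factor
N_PLAYERS = 5      # group size (note: 1 < MULTIPLIER < N_PLAYERS)

def payoffs(contributions):
    """Return each player's payoff given a list of 0/1 contribution choices."""
    pool = sum(c * ENDOWMENT for c in contributions) * MULTIPLIER
    share = pool / len(contributions)            # public good is shared equally
    return [ENDOWMENT * (1 - c) + share for c in contributions]

# Everyone contributes: the group as a whole does best.
all_in = payoffs([1, 1, 1, 1, 1])

# One player free-rides: that player does even better individually,
# which is exactly the incentive that undermines collective action.
one_free_rider = payoffs([0, 1, 1, 1, 1])

print("all contribute:", all_in)           # each player earns 30.0
print("one free rider:", one_free_rider)   # defector earns 34.0, others 24.0
```

With these illustrative parameters, every player earns 30.0 when all contribute, while a lone defector earns 34.0 and drags the group total down; this is the structure that the solutions discussed next attempt to correct.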
Solutions to collective action problems include mutually binding agreements, government regulation, privatisation, and assurance contracts, also known as crowdacting. Mancur Olson made the claim that individual rational choice leads to situations where individuals with more resources will carry a higher burden in the provision of the public good than poorer ones. Poorer individuals will usually have little choice but to opt for the free rider strategy, i.e., they will attempt to benefit from the public good without contributing to its provision. This may also encourage the under-production (inefficient production) of the public good. While public goods are often provided by governments, this is not always the case. Various institutional designs have been studied with the aim of reducing the collaborative failure. The best design for a given situation depends on the production costs, the utility function, and the collaborative effects, amongst other things. Here are some examples: A joint-product model analyzes the collaborative effect of joining a private good to a public good. For example, a tax deduction (private good) can be tied to a donation to a charity (public good). It can be shown that the provision of the public good increases when tied to the private good, as long as the private good is provided by a monopoly (otherwise the private good would be provided by competitors without the link to the public good). Some institutional design, e.g., intellectual property rights, can introduce an exclusion mechanism and turn a pure public good into an impure public good artificially. If the costs of the exclusion mechanism are not higher than the gain from the collaboration, clubs can emerge. James M. Buchanan showed in his seminal paper that clubs can be an efficient alternative to government interventions. A nation can be seen as a club whose members are its citizens. Government would then be the manager of this club. In some cases, theory shows that collaboration emerges spontaneously in smaller groups rather than in large ones (see e.g. Dunbar's number). This explains why labor unions or charities often have a federated structure. In philosophy Since the late 20th century, analytic philosophers have been exploring the nature of collective action in the sense of acting together, as when people paint a house together, go for a walk together, or together execute a pass play. These particular examples have been central for three of the philosophers who have made well known contributions to this literature: Michael Bratman, Margaret Gilbert, and John Searle, respectively. In Gilbert (1989) and subsequent articles and book chapters, including Gilbert (2006, chapter 7), Gilbert argues for an account of collective action according to which it rests on a special kind of interpersonal commitment, what she calls a "joint commitment". A joint commitment in Gilbert's sense is not a matter of a set of personal commitments independently created by each of the participants, as when each makes a personal decision to do something. Rather, it is a single commitment to whose creation each participant makes a contribution. Thus suppose that one person says "Shall we go for a walk?" and the other says "Yes, let's". Gilbert proposes that as a result of this exchange the parties are jointly committed to go for a walk, and thereby obligated to one another to act as if they were parts of a single person taking a walk.
Joint commitments can be created less explicitly and through processes that are more extended in time. One merit of a joint commitment account of collective action, in Gilbert's view, is that it explains the fact that those who are out on a walk together, for instance, understand that each of them is in a position to demand corrective action of the other if he or she acts in ways that affect negatively the completion of their walk. In (Gilbert 2006a) she discusses the pertinence of joint commitment to collective actions in the sense of the theory of rational choice. In Searle (1990) Searle argues that what lies at the heart of a collective action is the presence in the mind of each participant of a "we-intention". Searle does not give an account of we-intentions or, as he also puts it, "collective intentionality", but insists that they are distinct from the "I-intentions" that animate the actions of persons acting alone. In Bratman (1993) Bratman proposed that, roughly, two people "share an intention" to paint a house together when each intends that the house is painted by virtue of the activity of each, and also intends that it is so painted by virtue of the intention of each that it is so painted. That these conditions obtain must also be "common knowledge" between the participants. Discussion in this area continues to expand, and has influenced discussions in other disciplines including anthropology, developmental psychology, and economics. One general question is whether it is necessary to think in terms that go beyond the personal intentions of individual human beings properly to characterize what it is to act together. Bratman's account does not go beyond such personal intentions. Gilbert's account, with its invocation of joint commitment, does go beyond them. Searle's account does also, with its invocation of collective intentionality. The question of whether and how one must account for the existence of mutual obligations when there is a collective intention is another of the issues in this area of inquiry. Spontaneous consensus In addition to the psychological mechanisms of collective action as explained by the social identity model, researchers have developed sociological models of why collective action exists and have studied under what conditions collective action emerges. Along this social dimension, a special case of the general collective action problem is one of collective agreement: how does a group of agents (humans, animals, robots, etc.) reach consensus about a decision or belief, in the absence of central organization? Common examples can be found from domains as diverse as biology (flocking, shoaling and schooling, and general collective animal behavior), economics (stock market bubbles), and sociology (social conventions and norms) among others. Consensus is distinct from the collective action problem in that there often is not an explicit goal, benefit, or cost of action but rather it concerns itself with a social equilibrium of the individuals involved (and their beliefs). And it can be considered spontaneous when it emerges without the presence of a centralized institution among self-interested individuals. 
Spontaneous consensus can be considered along four dimensions, involving the social structure of the individuals participating in the consensus (local versus global) as well as the processes involved in reaching consensus (competitive versus cooperative): The underlying processes of spontaneous consensus can be viewed either as cooperation among individuals trying to coordinate themselves through their interactions or as competition between the alternatives or choices to be decided upon. Depending on the dynamics of the individuals involved as well as the context of the alternatives considered for consensus, the process can be wholly cooperative, wholly competitive, or a mix of the two. The distinction between local and global consensus can be viewed in terms of the social structure underlying the network of individuals participating in the consensus making process. Local consensus occurs when there is agreement between groups of neighboring nodes while global consensus refers to the state in which most of the population has reached an agreement. How and why consensus is reached is dependent on both the structure of the social network of individuals as well as the presence (or lack) of centralized institutions. There are many mechanisms (social and psychological) that have been identified to underlie the consensus making process. They have been used to both explain the emergence of spontaneous consensus and understand how to facilitate an equilibrium between individuals and can be grouped according to their role in the process. Due to the interdisciplinary nature of both the mechanisms as well as the applications of spontaneous consensus, a variety of techniques have been developed to study the emergence and evolution of spontaneous cooperation. Two of the most widely used are game theory and social network analysis. Traditionally, game theory has been used to study zero-sum games but has been extended to many different types of games. Relevant to the study of spontaneous consensus are cooperative and non-cooperative games. Since a consensus must be reached without the presence of any external authoritative institution for it to be considered spontaneous, non-cooperative games and Nash equilibrium have been the dominant paradigm for studying its emergence. In the context of non-cooperative games, a consensus is a formal Nash equilibrium that all players tend towards through self-enforcing alliances or agreements. An important case study of the underlying mathematical dynamics is the coordination game. Even when coordination is desired, it can be difficult to achieve due to incomplete information and constrained time horizons. An alternative approach to studying the emergence of spontaneous consensus—that avoids many of the unnatural or overly constrained assumptions of game theoretic models—is the use of network based methods and social network analysis (SNA). These SNA models are theoretically grounded in the communication mechanism of facilitating consensus and describe its emergence through the information propagation processes of the network (behavioral contagion). Through the spread of influence (and ideas) between agents participating in the consensus, local and global consensus can emerge if the agents in the network achieve a shared equilibrium state. Leveraging this model of consensus, researchers have shown that local peer influence can be used to reach a global consensus and cooperation across the entire network; a small simulation illustrating this idea is sketched below.
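As a concrete illustration of local peer influence driving a population toward agreement, the following Python sketch runs a synchronous majority-rule opinion update on a small ring network with a few random shortcut edges; the graph construction, update rule, and parameters are illustrative assumptions rather than a specific model from the consensus literature.

```python
import random

# Sketch of a local-influence consensus process on a random network
# (illustrative assumptions: ring-plus-random-links graph, synchronous
# majority-rule updates; not a specific published model).

random.seed(1)
N = 60

# Build a simple undirected graph: a ring with a few random "shortcut" edges.
neighbors = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
for _ in range(30):
    a, b = random.sample(range(N), 2)
    neighbors[a].add(b)
    neighbors[b].add(a)

# Each agent starts with a random binary opinion.
opinion = {i: random.choice([0, 1]) for i in range(N)}

def step(current):
    """Synchronous update: each agent adopts the majority opinion of its neighbors."""
    updated = {}
    for i in range(N):
        votes = sum(current[j] for j in neighbors[i])
        half = len(neighbors[i]) / 2
        if votes > half:
            updated[i] = 1
        elif votes < half:
            updated[i] = 0
        else:
            updated[i] = current[i]   # ties: keep the current opinion
    return updated

for t in range(50):
    share = sum(opinion.values()) / N
    if share in (0.0, 1.0):
        print(f"global consensus reached at step {t}")
        break
    opinion = step(opinion)
else:
    print("no full consensus within 50 steps; local clusters may persist")
```

Depending on the random seed and the network topology, the dynamics either converge to a global consensus or settle into stable local clusters, mirroring the local-versus-global distinction drawn above.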
While this model of consensus and cooperation has been shown to be successful in certain contexts, research suggests that communication and social influence cannot be fully captured by simple contagion models, and as such a pure contagion-based model of consensus may have limits. See also Footnotes Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Alexander_Friedmann] | [TOKENS: 1037] |
Contents Alexander Friedmann Alexander Alexandrovich Friedmann (also spelled Friedman or Fridman; /ˈfriːdmən/; Russian: Алекса́ндр Алекса́ндрович Фри́дман; 16 June [O.S. 4 June] 1888 – 16 September 1925) was a Russian and Soviet physicist and mathematician. He originated the pioneering theory that the universe is expanding, governed by a set of equations he developed known as the Friedmann equations. Early life Alexander Friedmann was born to the composer and ballet dancer Alexander Friedmann (who was a son of a baptized Jewish cantonist) and the pianist Ludmila Ignatievna Voyachek (who was a daughter of the Czech composer Hynek Vojáček). Friedmann was baptized into the Russian Orthodox Church as an infant, and lived much of his life in Saint Petersburg. Friedmann obtained his degree from St. Petersburg State University in 1910, and became a lecturer at Saint Petersburg Mining Institute. From his school days, Friedmann found a lifelong companion in Jacob Tamarkin, who was also a distinguished mathematician. World War I Friedmann fought in World War I on behalf of Imperial Russia, as an army aviator, an instructor, and eventually, under the revolutionary regime, as the head of an airplane factory. Professorship In 1922, Friedmann introduced the idea of an expanding universe that contained moving matter. Correspondence with Einstein suggests that Einstein was unwilling to accept the idea of an evolving universe and worked instead to revise his equations to support the static, eternal universe of Newton's time. In 1929 Hubble published the redshift versus distance relationship showing that all the galaxies in the neighborhood recede at a rate proportional to their distance, formalizing an observation made earlier by Carl Wilhelm Wirtz. Unaware of Friedmann's work, in 1927 Belgian astronomer Georges Lemaître independently formulated a theory of an evolving universe. In June 1925 Friedmann was appointed as director of the Main Geophysical Observatory in Leningrad. In July 1925 he participated in a record-setting balloon flight, reaching an elevation of 7,400 m (24,300 ft). Work Friedmann's 1924 papers, including "Über die Möglichkeit einer Welt mit konstanter negativer Krümmung des Raumes" ("On the possibility of a world with constant negative curvature of space") published by the German physics journal Zeitschrift für Physik (Vol. 21, pp. 326–332), demonstrated that he had command of all three Friedmann models describing positive, zero and negative curvature respectively, a decade before Robertson and Walker published their analysis. This dynamic cosmological model of general relativity would come to form the standard for both the Big Bang and Steady State theories. Friedmann's work supported both theories equally, so it was not until the detection of the cosmic microwave background radiation that the Steady State theory was abandoned in favor of the current favorite Big Bang paradigm. The classic solution of the Einstein field equations that describes a homogeneous and isotropic universe is called the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, after Friedmann, Georges Lemaître, Howard P. Robertson and Arthur Geoffrey Walker, who worked on the problem in the 1920s and 30s independently of Friedmann. In addition to general relativity, Friedmann's interests included hydrodynamics and meteorology. Physicists George Gamow, Vladimir Fock, and Lev Vasilievich Keller were among his students.
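For reference, the Friedmann equations can be written in their standard modern textbook form; the notation below (scale factor a(t), mass density ρ, pressure p, curvature constant k, cosmological constant Λ, gravitational constant G, and speed of light c) follows current convention rather than Friedmann's original papers.

```latex
% Friedmann equations in standard FLRW notation (textbook form, not Friedmann's original notation).
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3},
\qquad
\frac{\ddot{a}}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
  + \frac{\Lambda c^{2}}{3}.
```

The three Friedmann models mentioned above correspond to the sign of the curvature constant: k > 0 gives positive curvature, k = 0 a flat universe, and k < 0 negative curvature.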
Personal life and death In 1911, he married Ekaterina Dorofeeva, though he later divorced her. He married Natalia Malinina in 1923. They had a religious wedding ceremony, though both were far from religious. They had a son, Alexander Alexandrovich Friedman (1925–1983), born after his father's death. Friedmann died on 16 September 1925, from misdiagnosed typhoid fever. He had allegedly contracted the infection on his return from his honeymoon in Crimea, when he ate an unwashed pear bought at a railway station. Legacy The Moon crater Fridman is named after him. The Alexander Friedmann International Seminar is a periodic scientific event; the objective of the meeting is to promote contact between scientists working in relativity, gravitation, cosmology, and related fields. The First Alexander Friedmann International Seminar on Gravitation and Cosmology, devoted to the centenary of his birth, took place in 1988. During the 2022 COVID-19 protests in China, Tsinghua University students were seen displaying Friedmann's equation as if it were a protest slogan, which was understood as an evasion of censorship by punning multilingually on "freed man" and referring to liberalization and opening via the expansion of the universe. Selected publications References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Master%27s_degree] | [TOKENS: 6181] |
Contents Master's degree A master's degree[note 1] (from Latin magister) is a postgraduate academic degree awarded by universities or colleges upon completion of a course of study demonstrating mastery or a high-order overview of a specific field of study or area of professional practice. A master's degree normally requires previous study at the bachelor's level, either as a separate degree or as part of an integrated course. Within the area studied, master's graduates are expected to possess advanced knowledge of a specialized body of theoretical and applied topics; high order skills in analysis, critical evaluation, or professional application; and the ability to solve complex problems and think rigorously and independently. Historical development The master's degree dates back to the origin of European universities, with a Papal bull of 1233 decreeing that anyone admitted to the mastership in the University of Toulouse should be allowed to teach freely in any other university. The original meaning of the master's degree was thus that someone who had been admitted to the rank (degree) of master (i.e. teacher) in one university should be admitted to the same rank in other universities. This gradually became formalised as the licentia docendī (licence to teach). Originally, masters and doctors were not distinguished, but by the 15th century it had become customary in the English universities to refer to the teachers in the lower faculties (arts and grammar) as masters and those in the higher faculties as doctors. Initially, the Bachelor of Arts (BA) was awarded for the study of the trivium and the Master of Arts (MA) for the study of the quadrivium. From the late Middle Ages until the 19th century, the pattern of degrees was therefore to have a bachelor's and master's degree in the lower faculties and to have bachelor's and doctorates in the higher faculties. In the United States, the first master's degrees (Magister Artium, or Master of Arts) were awarded at Harvard University soon after its foundation. In Scotland, the pre-Reformation universities (St Andrews, Glasgow, and Aberdeen) developed so that the Scottish MA became their first degree, while in Oxford, Cambridge and Trinity College Dublin, the MA was awarded to BA graduates of a certain standing without further examination from the late 17th century, its main purpose being to confer full membership of the university. At Harvard the 1700 regulations required that candidates for the master's degree had to pass a public examination, but by 1835 this was awarded Oxbridge-style three years after the BA. The 19th century saw a great expansion in the variety of master's degrees offered. At the start of the century, the only master's degree was the MA, and this was normally awarded without any further study or examination. The Master in Surgery degree was introduced by the University of Glasgow in 1815. By 1861 this had been adopted throughout Scotland as well as by Cambridge and Durham in England and the University of Dublin in Ireland. When the Philadelphia College of Surgeons was established in 1870, it too conferred the Master of Surgery, "the same as that in Europe". In Scotland, Edinburgh maintained separate BA and MA degrees until the mid-19th century, although there were major doubts as to the quality of the Scottish degrees of this period. 
In 1832 Lord Brougham, the Lord Chancellor and an alumnus of the University of Edinburgh, told the House of Lords that "In England the Universities conferred degrees after a considerable period of residence, after much labour performed, and if they were not in all respects so rigorous as the statutes of the Universities required, nevertheless it could not be said, that Masters of Arts were created at Oxford and Cambridge as they were in Scotland, without any residence, or without some kind of examination. In Scotland, all the statutes of the Universities which enforced conditions on the grant of degrees were a dead letter." In 1837, separate examinations were reintroduced for the MA in England, at the newly established Durham University (even though, as in the ancient English universities, this was to confer full membership), to be followed in 1840 by the similarly new University of London, which was only empowered by its charter to grant degrees by examination. However, by the middle of the century the MA as an examined second degree was again under threat, with Durham moving to awarding it automatically to those who gained honours in the BA in 1857, along the lines of the Oxbridge MA, and Edinburgh following the other Scottish universities in awarding the MA as its first degree, in place of the BA, from 1858. At the same time, new universities were being established around the then British Empire along the lines of London, including examinations for the MA: the University of Sydney in Australia and the Queen's University of Ireland in 1850, and the Universities of Bombay (now the University of Mumbai), Madras and Calcutta in India in 1857. In the US, the revival of master's degrees as an examined qualification began in 1856 at the University of North Carolina, followed by the University of Michigan in 1859, although the idea of a master's degree as an earned second degree was not well established until the 1870s, alongside the PhD as the terminal degree. Sometimes it was possible to earn an MA either by examination or by seniority in the same institution; for example, in Michigan the "in course" MA was introduced in 1848 and was last awarded in 1882, while the "on examination" MA was introduced in 1859. Probably the most important master's degree introduced in the 19th century was the Master of Science (MS in the US, MSc in the UK). At the University of Michigan this was introduced in two forms in 1858: "in course", first awarded in 1859, and "on examination", first awarded in 1862. The "in course" MS was last awarded in 1876. In Britain, however, the degree took a while longer to arrive. When London introduced its Faculty of Sciences in 1858, the university was granted a new charter giving it the power "to confer the several Degrees of Bachelor, Master, and Doctor, in Arts, Laws, Science, Medicine, Music", but the degrees it awarded in science were the Bachelor of Science and the Doctor of Science. The same two degrees, again omitting the master's, were awarded at Edinburgh, despite the MA being the standard undergraduate degree for Arts in Scotland. In 1862, a royal commission suggested that Durham should award master's degrees in theology and science (with the suggested abbreviations MT and MS, contrary to later British practice of using MTh or MTheol and MSc for these degrees), but its recommendations were not enacted. 
In 1877, Oxford introduced the Master of Natural Science, along with the Bachelor of Natural Science, to stand alongside the MA and BA degrees and be awarded to students who took their degrees in the honours school of natural sciences. In 1879 a statute to actually establish the faculty of Natural Sciences at Oxford was promulgated, but in 1880 a proposal to rename the degree as a Master of Science was rejected along with a proposal to grant Masters of Natural Sciences a Master of Arts degree, in order to make them full members of the university. This scheme would appear to have then been quietly dropped, with Oxford going on to award BAs and MAs in science. The Master of Science (MSc) degree was finally introduced in Britain in 1878 at Durham, followed by the new Victoria University in 1881. At the Victoria University both the MA and MSc followed the lead of Durham's MA in requiring a further examination for those with an ordinary bachelor's degree but not for those with honours. At the start of the 20th century, there were four different sorts of master's degree in the UK: the Scottish MA, granted as a first degree; the Master of Arts (Oxbridge and Dublin), granted to all BA graduates a certain period after their first degree without further study; master's degrees that could be gained either by further study or by gaining an honours degree (which, at the time in the UK involved further study beyond the ordinary degree, as it still does in Scotland and some Commonwealth countries); and master's degrees that could only be obtained by further study (including all London master's degrees). In 1903, the London Daily News criticised the practice of Oxford and Cambridge, calling their MAs "the most stupendous of academic frauds" and "bogus degrees". Ensuing correspondence pointed out that "A Scotch M.A., at the most, is only the equivalent of an English B.A." and called for common standards for degrees, while defenders of the ancient universities said that "the Cambridge M.A. does not pretend to be a reward of learning" and that "it is rather absurd to describe one of their degrees as a bogus one because other modern Universities grant the same degree for different reasons". In 1900, Dartmouth College introduced the Master of Commercial Science (MCS), first awarded in 1902. This was the first master's degree in business, the forerunner of the modern MBA. The idea quickly crossed the Atlantic, with Manchester establishing a Faculty of Commerce, awarding Bachelor and Master of Commerce degrees, in 1903. Over the first half of the century the automatic master's degrees for honours graduates vanished as honours degrees became the standard undergraduate qualification in the UK. In the 1960s, new Scottish universities (except for Dundee, which inherited the undergraduate MA from St Andrews) reintroduced the BA as their undergraduate degree in arts, restoring the MA to its position as a postgraduate qualification. Oxford and Cambridge retained their MAs, but renamed many of their postgraduate bachelor's degrees in the higher faculties as master's degrees, e.g. the Cambridge LLB became the LLM in 1982, and the Oxford BLitt, BPhil (except in philosophy) and BSc became the MLitt, MPhil and MSc. In 1983, the Engineering Council issued a "Statement on enhanced and extended undergraduate engineering degree courses", proposing the establishment of a four-year first degree (Master of Engineering). 
These were up and running by the mid-1980s and were followed in the early 1990s by the MPhys for physicists and since then integrated master's degrees in other sciences such as MChem, MMath, and MGeol, and in some institutions general or specific MSci (Master in Science) and MArts (Master in Arts) degrees. This development was noted by the Dearing Report into UK Higher Education in 1997, which called for the establishment of a national framework of qualifications and identified five different routes to master's degrees: This led to the establishment of the Quality Assurance Agency, which was charged with drawing up the framework. In 2000, renewed pressure was put on Oxbridge MAs in the UK Parliament, with Labour MP Jackie Lawrence introducing an early day motion calling for them to be scrapped and telling the Times Higher Education it was a "discriminatory practice" and that it "devalues and undermines the efforts of students at other universities". The following month the Quality Assurance Agency announced the results of a survey of 150 major employers showing nearly two thirds mistakenly thought the Cambridge MA was a postgraduate qualification and just over half made the same error regarding the Edinburgh MA, with QAA chief executive John Randall calling the Oxbridge MA "misleading and anachronistic". The QAA released the first "framework for higher education qualifications in England, Wales and Northern Ireland" in January 2001. This specified learning outcomes for M-level (master's) degrees and advised that the title "Master" should only be used for qualifications that met those learning outcomes in full. It addressed many of the Dearing Report's concerns, specifying that shorter courses at H-level (honours), e.g. conversion courses, should be styled Graduate Diploma or Graduate Certificate rather than as master's degrees, but confirmed that the extended undergraduate degrees were master's degrees, saying that "Some Masters degrees in science and engineering are awarded after extended undergraduate programmes that last, typically, a year longer than Honours degree programmes". It also addressed the Oxbridge MA issue, noting that "the MAs granted by the Universities of Oxford and Cambridge are not academic qualifications". The first "framework for qualifications of Higher Education Institutes in Scotland", also published in January 2001, used the same qualifications descriptors, adding in credit values that specified that a stand-alone master should be 180 credits and a "Masters (following an integrated programme from undergraduate to Masters level study)" should be 600 credits with a minimum of 120 at M-level. It was specified that the title "Master" should only be used for qualifications that met the learning outcomes and credit definitions, although it was noted that "A small number of universities in Scotland have a long tradition of labelling certain first degrees as 'MA'. Reports of Agency reviews of such provision will relate to undergraduate benchmarks and will make it clear that the title reflects Scottish custom and practice, and that any positive judgement on standards should not be taken as implying that the outcomes of the programme were at postgraduate level." The Bologna declaration in 1999 started the Bologna Process, leading to the creation of the European Higher Education Area (EHEA). 
This established a three-cycle bachelor's—master's—doctorate classification of degrees, leading to the adoption of master's degrees across the continent, often replacing older long-cycle qualifications such as the Magister (arts), Diplom (sciences) and state registration (professional) awards in Germany. As the process continued, descriptors were introduced for all three levels in 2004, and ECTS credit guidelines were developed. This led to questions as to the status of the integrated master's degrees and one-year master's degrees in the UK. However, the Framework for Higher Education Qualifications in England, Wales and Northern Ireland and the Framework for Qualifications of Higher Education Institutes in Scotland have both been aligned with the overarching framework for the EHEA with these being accepted as masters-level qualifications. Titles Master's degrees are commonly titled using the form 'Master of ...', where either a faculty (typically Arts or Science) or a field (Engineering, Physics, Chemistry, Business Administration, etc.) is specified. The two most common titles of master's degrees are the Master of Arts (MA/AM) and Master of Science (MSc/MS/SM) degrees, which normally consist of a mixture of research and taught material. The title of Master of Philosophy (MPhil) indicates (in the same manner as Doctor of Philosophy) an extended degree with a large research component. Other generically named master's programs include the Master of Studies (MSt)/Master of Advanced Study (MASt)/Master of Advanced Studies (M.A.S.), and Professional Master's (MProf). Integrated master's degrees and postgraduate master's degrees oriented towards professional practice are often more specifically named for their field of study ("tagged degrees"), including, for example, Master of Business Administration, Master of Divinity, Master of Engineering, Master of Physics, and Master of Public Health. The form "Master in ..." is also sometimes used, particularly where a faculty title is used for an integrated master's degree in addition to its use in a traditional postgraduate master's degree, e.g. Master in Science (MSci) and Master in Arts (MArts). This form is also sometimes used with other integrated master's degrees and occasionally for postgraduate master's degrees (e.g. Master's in Accounting). Some universities use Latin degree names; because of the flexibility of syntax in Latin, the Master of Arts and Master of Science degrees may be known in these institutions as Magister artium and Magister scientiæ or reversed from the English order to Artium magister and Scientiæ magister. Examples of the reversed usage include Harvard University and the University of Chicago, leading to the abbreviations AM and SM for these degrees. The forms "Master of Science" and "Master in Science" are indistinguishable in Latin. In the UK, full stops (periods) are not commonly used in degree abbreviations. In the US, The Gregg Reference Manual recommends placing periods in degrees (e.g. B.S., Ph.D.), while The Chicago Manual of Style recommends writing degrees without periods (e.g. BS, PhD). Master of Science is generally abbreviated MS in countries following United States usage and MSc in countries following British usage, where MS would refer to the degree of Master of Surgery. In Australia, some extended master's degrees use the title "doctor": Juris doctor and Doctors of Medical Practice, Physiotherapy, Dentistry, Optometry and Veterinary Practice. 
Despite their titles, these are still master's degrees and may not be referred to as doctoral degrees, nor may graduates use the title "doctor". The UK Quality Assurance Agency defines three categories of master's degrees. The United States Department of Education classifies master's degrees as research or professional. Research master's degrees in the US (e.g., MA/AM or MS) require the completion of taught courses and examinations in a major and one or more minor subjects, as well as (normally) a research thesis. Professional master's degrees may be structured like research master's (e.g., ME/MEng) or may concentrate on a specific discipline (e.g., MBA) and often substitute a project for the thesis. The Australian Qualifications Framework classifies master's degrees as research, coursework or extended. Research master's degrees typically take one to two years, and at least two-thirds of their content consists of research, research training and independent study. Coursework master's degrees typically also last one to two years, and consist mainly of structured learning with some independent research and project work or practice-related learning. Extended master's degrees typically take three to four years and contain significant practice-related learning that must be developed in collaboration with relevant professional, statutory or regulatory bodies. In Ireland, master's degrees may be either Taught or Research. Taught master's degrees are normally one- to two-year courses, rated at 60–120 ECTS credits, while research master's degrees are normally two-year courses, either rated at 120 ECTS credits or not credit rated. Structure There is a range of pathways to the degree, with entry based on evidence of a capacity to undertake higher-level studies in a proposed field. A dissertation may or may not be required, depending on the program. In general, the structure and duration of a program of study leading to a master's degree will differ by country and university. Master's programs in the US and Canada are normally two years (full-time) in length. In some fields/programs, work on a doctorate begins immediately after the bachelor's degree, but a master's degree may be granted along the way as an intermediate qualification if the student petitions for it. Some universities offer evening options so that students can work during the day and earn a master's degree in the evenings. In the UK, postgraduate master's degrees typically take one to two years full-time or two to four years part-time. Master's degrees may be classified as either "research" or "taught", with taught degrees (those where research makes up less than half of the volume of work) being further subdivided into "specialist or advanced study" or "professional or practice". Taught degrees (of both forms) typically take a full calendar year (180 UK credits, compared to 120 for an academic year), while research degrees are not typically credit rated but may take up to two years to complete. An MPhil normally takes two calendar years (360 credits). An integrated master's degree (which is always a taught degree) combines a bachelor's degree course with an additional year of study (120 credits) at master's level for a four (England, Wales and Northern Ireland) or five (Scotland) academic year total period. 
In Australia, master's degrees vary from one year for a "research" or "coursework" master's following on from an Australian honours degree in a related field, with an extra six months if following on straight from an ordinary bachelor's degree and another extra six months if following on from a degree in a different field, to four years for an "extended" master's degree. At some Australian universities, the master's degree may take up to two years. In the Overarching Framework of Qualifications for the European Higher Education Area defined as part of the Bologna process, a "second cycle" (i.e. master's degree) programme is typically 90–120 ECTS credits, with a minimum requirement of at least 60 ECTS credits at second-cycle level. The definition of ECTS credits is that "60 ECTS credits are allocated to the learning outcomes and associated workload of a full-time academic year or its equivalent"; thus European master's degrees should last for between one calendar year and two academic years, with at least one academic year of study at master's level. Level 7 qualifications under the Framework for Higher Education Qualifications (FHEQ) in England, Wales and Northern Ireland and level 11 qualifications under the Framework for Qualifications of Higher Education Institutes in Scotland (FQHEIS) (postgraduate and integrated master's degrees, except for MAs from the ancient universities of Scotland and Oxbridge MAs) have been certified as meeting this requirement. Irish master's degrees are one to two years (60–120 ECTS credits) for taught degrees and two years (not credit rated) for research degrees. These have also been certified as compatible with the FQ-EHEA. Admission to a master's degree normally requires successful completion of study at bachelor's degree level, either (for postgraduate degrees) as a stand-alone degree or (for integrated degrees) as part of an integrated scheme of study. In countries where the bachelor's degree with honours is the standard undergraduate degree, this is often the normal entry qualification. In addition, students will normally have to write a personal statement and, in the arts and humanities, will often have to submit a portfolio of work. In the UK, students will normally need at least a 2:1 to be accepted onto a taught master's course, and possibly higher for a research master's; they may also have to provide evidence of their ability to successfully pursue a postgraduate degree. Graduate schools in the US similarly require strong undergraduate performance, and may require students to take one or more standardised tests, such as the GRE, GMAT or LSAT. Comparable European degrees In some European countries, a magister is a first degree and may be considered equivalent to a modern (standardized) master's degree (e.g., the German, Austrian and Polish university Diplom/Magister, or the similar five-year Diploma awarded in several subjects in Greek, Spanish, Portuguese, and other universities and polytechnics). Under the Bologna Process, countries in the European Higher Education Area (EHEA) are moving to a three-cycle (bachelor's - master's - doctorate) system of degrees. Two-thirds of EHEA countries have standardised on 120 ECTS credits for their second-cycle (master's) degrees, but 90 ECTS credits is the main form in Cyprus, Ireland and Scotland and 60–75 credits in Montenegro, Serbia and Spain. 
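As a rough illustration of the credit arithmetic above (a worked sketch rather than a figure from the framework documents; it assumes the ECTS definition quoted above together with the UK convention, mentioned earlier in this article, of 120 UK credits per academic year and 180 UK credits for a full calendar year, which implies roughly 2 UK credits per ECTS credit):

\[
\frac{90\ \text{ECTS}}{60\ \text{ECTS per academic year}} = 1.5\ \text{academic years} \approx \text{one full calendar year},
\qquad
\frac{120\ \text{ECTS}}{60\ \text{ECTS per academic year}} = 2\ \text{academic years}
\]
\[
180\ \text{UK credits} \times \frac{1\ \text{ECTS credit}}{2\ \text{UK credits}} = 90\ \text{ECTS credits}
\]

On these figures, the 90-credit form common in Ireland and Scotland sits at the one-calendar-year end of the range, while the 120-credit form standard in most EHEA countries corresponds to two academic years.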
The combined length of the first and second cycle varies from "3 + 1" years (240 ECTS credits), through "3 + 2" or "4 + 1" years (300 ECTS credits), to "4 + 2" years (360 ECTS credits). As of 2015, 31 EHEA countries have integrated programmes that combine the first and second cycle and lead to a second-cycle qualification (e.g. the UK integrated master's degree), particularly in STEM subjects and subjects allied to medicine. These typically have a duration of 300–360 ECTS credits (five to six years), with the integrated master's degrees in England, Wales and Northern Ireland being the shortest at 240 ECTS credits (four years). In Spain and Latin America, the master's degree nomenclature in the Spanish language varies from one country to another. In Argentina, Colombia, Chile, Ecuador, and Uruguay, the term "magíster" is used to refer to a person who has completed a master's degree and received the corresponding title. Brazil After acquiring a Bachelor's, Technologist or Licenciate Degree, students are qualified to continue their academic career through Master's Degree ("mestrado", in Portuguese, a.k.a. stricto sensu post-graduation) or Specialization Degree ("especialização", in Portuguese, a.k.a. lato sensu post-graduation) programs. The Master's program comprises 2–3 years of graduate-level studies. Usually focused on academic research, the Master's Degree requires, in any specific knowledge area, the development of a thesis to be presented and defended before a board of professors after the period of research. Conversely, the Specialization Degree also comprises 1–2 years of studies but does not require a new thesis to be proposed and defended; it is usually attended by professionals looking for complementary training in a specific area of their knowledge. In addition, many Brazilian universities offer an MBA program. However, these are not equivalent to a United States MBA degree, as they do not formally certify the student with a Master's degree (stricto sensu) but with a Specialization Degree (lato sensu) instead. A regular post-graduation course has to comply with a minimum of 360 class-hours, while an MBA degree has to comply with a minimum of 400 class-hours. The Master's degree (stricto sensu) does not require a set minimum of class-hours, but it is practically impossible to finish it in less than 18 months due to the workload and research required; an average time for the degree is 2.5 years. Specialization (lato sensu) and MBA degrees can also be offered as distance education courses, while the master's degree (stricto sensu) requires physical attendance. In Brazil, the degree often serves as an additional qualification for those seeking to differentiate themselves in the job market, or for those who want to pursue a PhD. It corresponds to the European (Bologna Process) 2nd Cycle or the North American master's. Asia Hong Kong requires one or two years of full-time coursework to achieve a master's degree. For part-time study, two or three years of study are normally required to achieve a postgraduate degree. As in the United Kingdom, the MPhil is the most advanced master's degree and usually includes both a taught portion and a research portion which requires candidates to complete extensive original research for their thesis. Regardless of subject, students in all faculties (including sciences, arts, humanities and social sciences) may be awarded the Master of Philosophy. 
In the Pakistani education system, there are two different master's degree programmes. Master's degrees are earned after having received a bachelor's pass degree, or one year after an honours degree. The master's program generally lasts for two years. Both MA and MS are offered in all major subjects. In the Indian system, a master's degree is a postgraduate degree following a Bachelor's degree and preceding a Doctorate, usually requiring two years to complete. The available degrees include but are not limited to the following: In the Indonesian higher education system, a master's degree (Indonesian: magister) is a postgraduate degree following a bachelor's degree, preceding a doctorate and requiring a maximum of four years to complete. Master's degree students are required to submit their thesis (Indonesian: tesis) for examination by two or three examiners. The available degrees include but are not limited to the following: Postgraduate studies in Israel require the completion of a bachelor's degree and are dependent upon the grades achieved in it; see Education in Israel#Higher education. Degrees awarded are the MA, MSc, MBA and LLM; the Technion awards a non-thesis MEng. There is also a "direct track" doctoral degree, which lasts four to five years. Taking this route, students prepare a preliminary research paper during their first year, after which they must pass an exam before being allowed to proceed, at which point they are awarded a master's degree. In Nepal, after completing a bachelor's degree, students must spend at least three or four years studying full-time in college and university, with an entrance test for those who wish to pursue master's and doctoral degrees. All doctoral and other third-cycle degrees are research- and experience-oriented, with a focus on results. After successfully completing a bachelor's degree, students may pursue master's degrees in engineering, education, and the arts, as well as in law- and medicine-related courses. The MBBS is a medical degree comprising six and a half years of study and leading to qualification as a medical doctor; subsequent study must be completed within four years after the master's degree, with a minimum of 15 or 16 years of prior university and bachelor's-level education. The following are the most professional and internationalized programs in Nepal: In Taiwan, bachelor's degree courses are about four years in length, while an entrance examination is required for people who want to study for master's degrees and doctorates. The courses leading to these higher degrees are normally research-based. Tuition is less expensive than would be the case in North America, costing as little as US$5000 for an MBA. As an incentive designed to increase foreign student numbers, the government and universities of Taiwan have redoubled their efforts to make a range of high-quality scholarships available in the form of university-specific scholarships that include tuition waivers of up to NT$20,000 per month. The government offers the Taiwan Scholarship amounting to NT$20,000–30,000 per month (US$654–981) for a two-year program. See also Notes References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/SKILL] | [TOKENS: 292] |
Contents Cadence SKILL SKILL is a Lisp dialect used as a scripting and PCell (parameterized cell) description language in many electronic design automation (EDA) software suites by Cadence Design Systems. It was originally put forth in an Institute of Electrical and Electronics Engineers (IEEE) paper in 1990. History SKILL was originally based on a flavor of Lisp called Franz Lisp, created at the University of California, Berkeley by the students of Professor Richard J. Fateman. SKILL is not an acronym; it is a name. For trademark reasons Cadence prefers it be capitalized. Franz Lisp and all other flavors of LISP were eventually superseded by an ANSI standard for Common Lisp. Historically, SKILL was known as IL; SKILL was a library of IL functions. The name was originally an initialism for Silicon Compiler Interface Language (SCIL), pronounced "SKIL", which then morphed into "SKILL", a plain English word that was easier for everyone to remember. "IL" stood simply for Interface Language. Although SKILL was used initially to describe the application programming interface (API) rather than the language, the snappier name stuck. The name IL survives in the common file extension for SKILL code, .il, which designates that the code contained in the file has lisp-2 semantics. Another possible file extension is .ils, designating that the content has lisp-1 semantics. References Academic: External links |
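To make the lisp-2 versus lisp-1 distinction behind the two file extensions concrete, the following is a minimal sketch rather than code from any Cadence manual: the names square and applyTwice are invented for illustration, and only generic SKILL forms (procedure, setq, lambda, printf) are assumed.

; example.il -- traditional SKILL, lisp-2 semantics: a symbol's function
; binding and its value binding live in separate namespaces.
procedure( square(x)
  x * x                      ; C-style infix syntax is also accepted
)
setq( square 25 )            ; legal alongside the procedure: no clash in lisp-2
printf( "%d\n" square(4) )   ; calls the function binding, printing 16
printf( "%d\n" square )      ; reads the value binding, printing 25

; example.ils -- SKILL++, lisp-1 semantics: a single namespace with lexical
; scoping, so a function held in a variable can be called directly.
procedure( applyTwice(f x)
  f( f( x ) )                ; 'f' is an ordinary value holding a function
)
printf( "%d\n" applyTwice( lambda( (x) x + 1 ) 3 ) )   ; prints 5

In a lisp-1 (.ils-style) file, the setq in the first half would instead overwrite the single binding of square, which is exactly the behavioural difference the two extensions are meant to signal.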
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sassanid_Empire] | [TOKENS: 21573] |
Contents Sasanian Empire The Sasanian Empire (/səˈsɑːniən/), officially Eranshahr (Middle Persian: 𐭠𐭩𐭥𐭠𐭭𐭱𐭲𐭥𐭩 Ērānšahr, "Empire of the Iranians"),[a] was an Iranian empire that was founded and ruled by the House of Sasan from 224 to 651 AD. Lasting for over four centuries, the length of the Sasanian dynasty's reign over ancient Iran was second only to that of the Arsacid dynasty of Parthia which immediately preceded it. Founded by Ardashir I, whose rise coincided with the decline of Arsacid influence in the face of both internal and external strife, the House of Sasan was highly determined to restore the legacy of the Achaemenid Empire by expanding and consolidating the dominions of the Iranian nation. Most notably, after defeating Artabanus IV of Parthia at the Battle of Hormozdgan in 224, it began competing far more zealously with the neighbouring Roman Empire than the Arsacids had, thus sparking a new phase of the Roman–Iranian Wars. These efforts by Sasanian rulers ultimately led to the re-establishment of Iran as a major power of late antiquity. At its zenith, the Sasanian Empire controlled all of modern-day Iran and Iraq and parts of the Arabian Peninsula, in particular Eastern Arabia and South Arabia, the Caucasus, the Levant, parts of Central Asia as well as parts of the Indian subcontinent. They maintained Ctesiphon as the capital city—as it had been under the Arsacids—for all but the first two years of their empire's existence, when Istakhr briefly served in this capacity. A high point in the history of Iranian civilization, the Sasanian Empire was characterized by a complex and centralized government bureaucracy and the revitalization of Zoroastrianism as a legitimizing and unifying ideal. This period saw the construction of many grand monuments, public works, and patronized cultural and educational institutions. Under the Sasanians, Iran's cultural influence spread far beyond the territory that it controlled, influencing regions as distant as Western Europe, Eastern Africa, and China and India. It also helped shape European and Asian medieval art. Following the rise of Islam in Arabia, and a devastating war with the Byzantine/Eastern Roman Empire, the Sasanian Empire fell to the early Muslim conquests, which were initiated by Muhammad and continued under the Rashidun Caliphate. Although the Muslim conquest of Iran marked a significant religious and cultural shift in the nation's history, the Islamization of Iran enabled the gradual absorption of Sasanian art, architecture, music, literature, and philosophy into nascent Islamic culture, which, in turn, ensured and sustained the proliferation and evolution of Iranian culture, knowledge, and ideas throughout the growing Muslim world. Name Officially, the Sasanian Empire was known as the "Empire[b] of the Iranians" (Middle Persian: 𐭠𐭩𐭥𐭠𐭭𐭱𐭲𐭥𐭩 Ērānšahr; Parthian: 𐭀𐭓𐭉𐭀𐭍𐭇𐭔𐭕𐭓 Aryānxšahr; Greek: Ἀριανῶν ἔθνος Arianōn Ethnos). The term is first attested in the trilingual Great Inscription of Shapur I, in which the king declares "I am the lord of the Empire of the Iranians."[c] More commonly, the empire, like the Sasanian dynasty itself, is named directly after Sasan in historical and academic sources. This term is variously recorded in English as the Sassanian Empire, the Sasanid Empire, and the Sassanid Empire. History Conflicting accounts shroud the details of the fall of the Parthian Empire and subsequent rise of the Sassanian Empire in mystery. The Sassanian Empire was established in Estakhr by Ardashir I. 
Ardashir's father, Papak, was originally the ruler of a region called Khir. However, by 200, Papak had managed to overthrow Gochihr and appoint himself the new ruler of the Bazrangids. Papak's mother, Rodhagh, was the daughter of the provincial governor of Pars. Papak and his eldest son Shapur managed to expand their power over all of Pars. Subsequent events are unclear due to the elusive nature of the sources. It is certain that following the death of Papak, Ardashir, the governor of Darabgerd, became involved in a power struggle with his elder brother Shapur. Sources reveal that Shapur was killed when the roof of a building collapsed on him. By 208, over the protests of his other brothers, who were put to death, Ardashir declared himself ruler of Pars. Once Ardashir was appointed shah (king), he moved his capital further to the south of Pars and founded Ardashir-Khwarrah (formerly Gur, modern day Firuzabad). The city, well protected by high mountains and easily defensible due to the narrow passes that approached it, became the center of Ardashir's efforts to gain more power. It was surrounded by a high, circular wall, probably copied from that of Darabgerd. Ardashir's palace was on the north side of the city; remains of it are extant. After establishing his rule over Pars, Ardashir rapidly extended his territory, demanding fealty from the local princes of Pars, and gaining control over the neighbouring provinces of Kerman, Isfahan, Susiana and Mesene. This expansion quickly came to the attention of Artabanus IV, the Parthian king, who initially ordered the governor of Khuzestan to wage war against Ardashir in 224, but Ardashir was victorious in the ensuing battles. In a second attempt to destroy Ardashir, Artabanus himself met Ardashir in battle at Hormozgan, where the former met his death. Following the death of the Parthian ruler, Ardashir went on to invade the western provinces of the now defunct Parthian Empire. At that time the Arsacid dynasty was divided between supporters of Artabanus IV and Vologases VI, which probably allowed Ardashir to consolidate his authority in the south with little or no interference from the Parthians. Ardashir was aided by the geography of the province of Pars, which was separated from the rest of Iran. Crowned in 224 at Ctesiphon as the sole ruler of Persia, Ardashir took the title shahanshah, or "King of Kings" (the inscriptions mention Adhur-Anahid as his banbishnan banbishn, "Queen of Queens", but her relationship with Ardashir has not been fully established), bringing the 400-year-old Parthian Empire to an end, and beginning four centuries of Sassanid rule. In the next few years, local rebellions occurred throughout the empire. Nonetheless, Ardashir I further expanded his new empire to the east and northwest, conquering the provinces of Sakastan, Gorgan, Khorasan, Marw (in modern Turkmenistan), Balkh and Chorasmia. He also added Bahrain and Mosul to the Sassanid possessions. Later Sassanid inscriptions also claim the submission of the kings of Kushan, Turan and Makuran to Ardashir, although based on numismatic evidence it is more likely that these actually submitted to Ardashir's son, the future Shapur I. In the west, assaults against Hatra, Armenia and Adiabene met with less success. In 230, Ardashir raided deep into Roman territory, and a Roman counter-offensive two years later ended inconclusively. 
Ardashīr began leading campaigns into Greater Khurasan as early as 233, extending his power to Khwarazm in the north and Sistan in the south while capturing lands from Gorgan to Abarshahr, Marw, and as far east as Balkh. Ardashir I's son Shapur I continued the expansion of the empire, conquering Bactria and the western portion of the Kushan Empire, while leading several campaigns against Rome. Invading Roman Mesopotamia, Shapur I captured Carrhae and Nisibis, but in 243 the Roman general Timesitheus defeated the Persians at Rhesaina and regained the lost territories. The emperor Gordian III's (238–244) subsequent advance down the Euphrates was defeated at Meshike (244), leading to Gordian's murder by his own troops and enabling Shapur to conclude a highly advantageous peace treaty with the new emperor Philip the Arab, by which he secured the immediate payment of 500,000 denarii and further annual payments. Shapur soon resumed the war, defeated the Romans at Barbalissos (253), and then probably took and plundered Antioch. Roman counter-attacks under the emperor Valerian ended in disaster when the Roman army was defeated and besieged at Edessa and Valerian was captured by Shapur, remaining his prisoner for the rest of his life. Shapur celebrated his victory by carving the impressive rock reliefs in Naqsh-e Rostam and Bishapur, as well as a monumental inscription in Persian and Greek in the vicinity of Persepolis. He exploited his success by advancing into Anatolia (260), but withdrew in disarray after defeats at the hands of the Romans and their Palmyrene ally Odaenathus, suffering the capture of his harem and the loss of all the Roman territories he had occupied. In 262 CE, Shapur attacked the Kushans, destroying the cities of Begram and Taxila. Shapur had intensive development plans. He ordered the construction of the first dam bridge in Iran and founded many cities, some settled in part by emigrants from the Roman territories, including Christians who could exercise their faith freely under Sassanid rule. Two cities, Bishapur and Nishapur, are named after him. He particularly favoured Manichaeism, protecting Mani (who dedicated one of his books, the Shabuhragan, to him) and sent many Manichaean missionaries abroad. He also befriended a Babylonian rabbi called Samuel. This friendship was advantageous for the Jewish community and gave them a respite from the oppressive laws enacted against them. Later kings reversed Shapur's policy of religious tolerance. When Shapur's son Bahram I acceded to the throne, he was pressured by the Zoroastrian high-priest Kartir to kill Mani and persecute his followers. Bahram II was also amenable to the wishes of the Zoroastrian priesthood. During his reign, the Sassanid capital Ctesiphon was sacked by the Romans under Emperor Carus, and most of Armenia, after half a century of Persian rule, was ceded to Diocletian. Succeeding Bahram III (who ruled briefly in 293), Narseh embarked on another war with the Romans. After an early success against the Emperor Galerius near Callinicum on the Euphrates in 296, he was eventually decisively defeated by them. Galerius had been reinforced, probably in the spring of 298, by a new contingent collected from the empire's Danubian holdings. Narseh did not advance from Armenia and Mesopotamia, leaving Galerius to lead the offensive in 298 with an attack on northern Mesopotamia via Armenia. 
Narseh retreated to Armenia to fight Galerius's force, to the former's disadvantage: the rugged Armenian terrain was favourable to Roman infantry, but not to Sassanid cavalry. Local aid gave Galerius the advantage of surprise over the Persian forces, and, in two successive battles, Galerius secured victories over Narseh. During the second encounter, Roman forces seized Narseh's camp, his treasury, his harem, and his wife. Galerius advanced into Media and Adiabene, winning successive victories, most prominently near Erzurum, and securing Nisibis (Nusaybin, Turkey) before 1 October 298. He then advanced down the Tigris, taking Ctesiphon. Narseh had previously sent an ambassador to Galerius to plead for the return of his wives and children. Peace negotiations began in the spring of 299, with both Diocletian and Galerius presiding. The conditions of the peace were heavy: Persia would give up territory to Rome, making the Tigris the boundary between the two empires. Further terms specified that Armenia was returned to Roman domination, with the fort of Ziatha as its border; Caucasian Iberia would pay allegiance to Rome under a Roman appointee; Nisibis, now under Roman rule, would become the sole conduit for trade between Persia and Rome; and Rome would exercise control over the five satrapies between the Tigris and Armenia: Ingilene, Sophanene (Sophene), Arzanene (Aghdznik), Corduene, and Zabdicene (near modern Hakkâri, Turkey). The Sassanids ceded five provinces west of the Tigris, and agreed not to interfere in the affairs of Armenia and Georgia. In the aftermath of this defeat, Narseh gave up the throne and died a year later, leaving the Sassanid throne to his son, Hormizd II. Unrest spread throughout the land, and while the new king suppressed revolts in Sakastan and Kushan, he was unable to control the nobles and was subsequently killed by Bedouins on a hunting trip in 309. Following Hormizd II's death, northern Arabs started to ravage and plunder the western cities of the empire, even attacking the province of pars, the birthplace of the Sassanid kings. Meanwhile, Persian nobles killed Hormizd II's eldest son, blinded the second, and imprisoned the third (who later escaped into Roman territory). The throne was reserved for a younger son, Shapur II.[d] During his youth the empire was controlled by his mother and the nobles. Upon coming of age, Shapur II assumed power and quickly proved to be an active and effective ruler. He first led his small but disciplined army south against the Arabs, whom he defeated, securing the southern areas of the empire. He then began his first campaign against the Romans in the west, where Persian forces won a series of battles but were unable to make territorial gains due to the failure of repeated sieges of the key frontier city of Nisibis, and Roman success in retaking the cities of Singara and Amida after they had previously fallen to the Persians. These campaigns were halted by nomadic raids along the eastern borders of the empire, which threatened Transoxiana, a strategically critical area for control of the Silk Road. Shapur therefore marched east toward Transoxiana to meet the eastern nomads, leaving his local commanders to mount nuisance raids on the Romans. He crushed the Central Asian tribes, and annexed the area as a new province. In the east around 325, Shapur II regained the upper hand against the Kushano-Sasanian Kingdom and took control of large territories in areas now known as Afghanistan and Pakistan. 
Cultural expansion followed this victory, and Sasanian art penetrated Transoxiana, reaching as far as China. Shapur, along with the nomad King Grumbates, started his second campaign against the Romans in 359 and soon succeeded in retaking Singara and Amida. In response the Roman emperor Julian struck deep into Persian territory and defeated Shapur's forces at Ctesiphon. He failed to take the capital, however, and was killed while trying to retreat to Roman territory. His successor Jovian, trapped on the east bank of the Tigris, had to hand over all the provinces the Persians had ceded to Rome in 298, as well as Nisibis and Singara, to secure safe passage for his army out of Persia. From around 370, however, towards the end of the reign of Shapur II, the Sasanians lost the control of Bactria to invaders from the north: first the Kidarites, then the Hephthalites and finally the Alchon Huns, who would follow up with an invasion of India. These invaders initially issued coins based on Sasanian designs. Various coins minted in Bactria and based on Sasanian designs are extant, often with busts imitating Sassanian kings Shapur II (r. 309 to 379) and Shapur III (r. 383 to 388), adding the Alchon Tamgha and the name "Alchono" in Bactrian script on the obverse, and with attendants to a fire altar on the reverse. Shapur II pursued a harsh religious policy. Under his reign, the collection of the Avesta, the sacred texts of Zoroastrianism, was completed, heresy and apostasy were punished, and Christians were persecuted. The latter was a reaction against the Christianization of the Roman Empire by Constantine the Great. Shapur II, like Shapur I, was amicable towards Jews, who lived in relative freedom and gained many advantages during his reign.[e] At the time of his death, the Persian Empire was stronger than ever, with its enemies to the east pacified and Armenia under Persian control. From Shapur II's death until Kavad I's first coronation, there was a largely peaceful period with the Romans (by this time the Eastern Roman or Byzantine Empire) engaged in just two brief wars with the Sasanian Empire, the first in 421–422 and the second in 440. Throughout this era, Sasanian religious policy differed dramatically from king to king. Despite a series of weak leaders, the administrative system established during Shapur II's reign remained strong, and the empire continued to function effectively. After Shapur II died in 379, the empire passed on to his half-brother Ardashir II (379–383; son of Hormizd II) and his son Shapur III (383–388), neither of whom demonstrated their predecessor's skill in ruling. Bahram IV (388–399) also failed to achieve anything important for the empire. During this time Armenia was divided by a treaty between the Roman and Sasanian empires. The Sasanians reestablished their rule over Greater Armenia, while the Byzantine Empire held a small portion of western Armenia. Bahram IV's son Yazdegerd I (399–421) is often compared to Constantine I. Both were physically and diplomatically powerful, opportunistic, practiced religious tolerance and provided freedom for the rise of religious minorities. Yazdegerd stopped the persecution against the Christians and punished nobles and priests who persecuted them. His reign marked a relatively peaceful era with the Romans, and he even took the young Theodosius II (408–450) under his guardianship. Yazdegerd also married a Jewish princess, who bore him a son called Narsi. 
Yazdegerd I's successor was his son Bahram V (421–438), one of the most well-known Sasanian kings and the hero of many myths. These myths persisted even after the destruction of the Sasanian Empire by the Arabs. Bahram gained the crown after Yazdegerd's sudden death (or assassination), which occurred when the grandees opposed the king with the help of al-Mundhir, the Arabic dynast of al-Hirah. Bahram's mother was Shushandukht, the daughter of the Jewish Exilarch. In 427, he crushed an invasion in the east by the nomadic Hephthalites, extending his influence into Central Asia, where his portrait survived for centuries on the coinage of Bukhara (in modern Uzbekistan). Bahram deposed the vassal king of the Iranian-held area of Armenia and made it a province of the empire. Bahram V's son Yazdegerd II (438–457) was in some ways a moderate ruler, but, in contrast to Yazdegerd I, he practised a harsh policy towards minority religions, particularly Christianity. However, at the Battle of Avarayr in 451, the Armenian subjects led by Vardan Mamikonian reaffirmed Armenia's right to profess Christianity freely. This was to be later confirmed by the Nvarsak Treaty (484). At the beginning of his reign in 441, Yazdegerd II assembled an army of soldiers from various nations, including his Indian allies, and attacked the Byzantine Empire, but peace was soon restored after some small-scale fighting. He then gathered his forces in Nishapur in 443 and launched a prolonged campaign against the Kidarites. After a number of battles he crushed them and drove them out beyond the Oxus river in 450. During his eastern campaign, Yazdegerd II grew suspicious of the Christians in the army and expelled them all from the governing body and army. He then persecuted the Christians in his land, and, to a much lesser extent, the Jews. In order to reestablish Zoroastrianism in Armenia, he crushed an uprising of Armenian Christians at the Battle of Vartanantz in 451. The Armenians, however, remained primarily Christian. In his later years, he was engaged yet again with the Kidarites right up until his death in 457. Hormizd III (457–459), the younger son of Yazdegerd II, then ascended to the throne. During his short rule, he continually fought with his elder brother Peroz I, who had the support of the nobility, and with the Hephthalites in Bactria. He was killed by his brother Peroz in 459. At the beginning of the 5th century, the Hephthalites (White Huns), along with other nomadic groups, attacked Iran. At first Bahram V and Yazdegerd II inflicted decisive defeats against them and drove them back eastward. The Huns returned at the end of the 5th century and defeated Peroz I (457–484) in 483. Following this victory, the Huns invaded and plundered parts of eastern Iran continually for two years. They exacted heavy tribute for some years thereafter. These attacks brought instability and chaos to the kingdom. Peroz tried again to drive out the Hephthalites, but on the way to Balkh his army was trapped by the Huns in the desert. Peroz was defeated and killed by a Hephthalite army near Balkh. His army was completely destroyed, and his body was never found. Four of his sons and brothers had also died. The main Sasanian cities of the eastern region of Khorasan−Nishapur, Herat and Marw were now under Hephthalite rule. Sukhra, a member of the Parthian House of Karen, one of the Seven Great Houses of Iran, quickly raised a new force and stopped the Hephthalites from achieving further success. 
Peroz's brother, Balash, was elected as shah by the Iranian magnates, most notably Sukhra and the Mihranid general Shapur Mihran. Balash (484–488) was a mild and generous monarch, and showed care towards his subjects, including the Christians. However, he proved unpopular among the nobility and clergy who had him deposed after just four years in 488. Sukhra, who had played a key role in Balash's deposition, appointed Peroz's son Kavad I as the new shah of Iran. According to Miskawayh (d. 1030), Sukhra was Kavad's maternal uncle. Kavad I (488–531) was an energetic and reformist ruler. He gave his support to the sect founded by Mazdak, son of Bamdad, who demanded that the rich should divide their wives and their wealth with the poor. By adopting the doctrine of the Mazdakites, his intention evidently was to break the influence of the magnates and the growing aristocracy. These reforms led to his being deposed and imprisoned in the Castle of Oblivion in Khuzestan, and his younger brother Jamasp (Zamaspes) became king in 496. Kavad, however, quickly escaped and was given refuge by the Hephthalite king. Jamasp (496–498) was installed on the Sasanian throne upon the deposition of Kavad I by members of the nobility. He was a good and kind king; he reduced taxes in order to improve the condition of the peasants and the poor. He was also an adherent of the mainstream Zoroastrian religion, diversions from which had cost Kavad I his throne and freedom. Jamasp's reign soon ended, however, when Kavad I, at the head of a large army granted to him by the Hephthalite king, returned to the empire's capital. Jamasp stepped down from his position and returned the throne to his brother. No further mention of Jamasp is made after the restoration of Kavad I, but it is widely believed that he was treated favourably at the court of his brother. The second golden era began after the second reign of Kavad I. With the support of the Hephthalites, Kavad launched a campaign against the Romans. In 502, he took Theodosiopolis in Armenia, but lost it soon afterwards. In 503 he took Amida on the Tigris. In 504, an invasion of Armenia by the western Huns from the Caucasus led to an armistice, the return of Amida to Roman control and a peace treaty in 506. In 521/522 Kavad lost control of Lazica, whose rulers switched their allegiance to the Romans; an attempt by the Iberians in 524/525 to do likewise triggered a war between Rome and Persia. In 527, a Roman offensive against Nisibis was repulsed and Roman efforts to fortify positions near the frontier were thwarted. In 530, Kavad sent an army under Perozes to attack the important Roman frontier city of Dara. The army was met by the Roman general Belisarius, and, though superior in numbers, was defeated at the Battle of Dara. In the same year, a second Persian army under Mihr-Mihroe was defeated at Satala by Roman forces under Sittas and Dorotheus, but in 531 a Persian army accompanied by a Lakhmid contingent under Al-Mundhir III defeated Belisarius at the Battle of Callinicum, and in 532 an "eternal peace" was concluded. Kavad succeeded in restoring order in the interior and fought with general success against the Eastern Romans, founded several cities, some of which were named after him, and began to regulate taxation and internal administration. After the reign of Kavad I, his son Khosrow I, also known as Anushirvan ("with the immortal soul"; ruled 531–579), ascended to the throne. He is the most celebrated of the Sassanid rulers. 
Khosrow I is most famous for his reforms in the aging governing body of Sassanids. He introduced a rational system of taxation based upon a survey of landed possessions, which his father had begun, and he tried in every way to increase the welfare and the revenues of his empire. Previous great feudal lords fielded their own military equipment, followers, and retainers. Khosrow I developed a new force of dehqans, or "knights", paid and equipped by the central government and the bureaucracy, tying the army and bureaucracy more closely to the central government than to local lords. Emperor Justinian I (527–565) paid Khosrow I 440,000 pieces of gold as a part of the "eternal peace" treaty of 532. In 540, Khosrow broke the treaty and invaded Syria, sacking Antioch and extorting large sums of money from a number of other cities. Further successes followed: in 541 Lazica defected to the Persian side, and in 542 a major Byzantine offensive in Armenia was defeated at Anglon. Also in 541, Khosrow I entered Lazica at the invitation of its king, captured the main Byzantine stronghold at Petra, and established another protectorate over the country, commencing the Lazic War. A five-year truce agreed to in 545 was interrupted in 547 when Lazica again switched sides and eventually expelled its Persian garrison with Byzantine help; the war resumed but remained confined to Lazica, which was retained by the Byzantines when peace was concluded in 562. In 565, Justinian I died and was succeeded by Justin II (565–578), who resolved to stop subsidies to Arab chieftains to restrain them from raiding Byzantine territory in Syria. A year earlier, the Sassanid governor of Armenia, Chihor-Vishnasp of the Suren family, built a fire temple at Dvin near modern Yerevan, and he put to death an influential member of the Mamikonian family, touching off a revolt which led to the massacre of the Persian governor and his guard in 571, while rebellion also broke out in Iberia. Justin II took advantage of the Armenian revolt to stop his yearly payments to Khosrow I for the defense of the Caucasus passes. The Armenians were welcomed as allies, and an army was sent into Sassanid territory which besieged Nisibis in 573. However, dissension among the Byzantine generals not only led to an abandonment of the siege, but they in turn were besieged in the city of Dara, which was taken by the Persians. Capitalizing on this success, the Persians then ravaged Syria, causing Justin II to agree to make annual payments in exchange for a five-year truce on the Mesopotamian front, although the war continued elsewhere. In 576 Khosrow I led his last campaign, an offensive into Anatolia which sacked Sebasteia and Melitene, but ended in disaster: defeated outside Melitene, the Persians suffered heavy losses as they fled across the Euphrates under Byzantine attack. Taking advantage of Persian disarray, the Byzantines raided deep into Khosrow's territory, even mounting amphibious attacks across the Caspian Sea. Khosrow sued for peace, but he decided to continue the war after a victory by his general Tamkhosrow in Armenia in 577, and fighting resumed in Mesopotamia. The Armenian revolt came to an end with a general amnesty, which brought Armenia back into the Sassanid Empire. Around 570, "Ma 'd-Karib", half-brother of the King of Yemen, requested Khosrow I's intervention. Khosrow I sent a fleet and a small army under a commander called Vahriz to the area near present Aden, and they marched against the capital San'a'l, which was occupied. 
Saif, son of Mard-Karib, who had accompanied the expedition, became King sometime between 575 and 577. Thus, the Sassanids were able to establish a base in South Arabia to control the sea trade with the east. Later, the south Arabian kingdom renounced Sassanid overlordship, and another Persian expedition was sent in 598 that successfully annexed southern Arabia as a Sassanid province, which lasted until the time of troubles after Khosrow II. Khosrow I's reign witnessed the rise of the dehqans (literally, village lords), the petty landholding nobility who were the backbone of later Sassanid provincial administration and the tax collection system. Khosrow I built infrastructure, embellishing his capital and founding new towns with the construction of new buildings. He rebuilt the canals and restocked the farms destroyed in the wars. He built strong fortifications at the passes and placed subject tribes in carefully chosen towns on the frontiers to act as guardians against invaders. He was tolerant of all religions, though he decreed that Zoroastrianism should be the official state religion, and was not unduly disturbed when one of his sons became a Christian. After Khosrow I, Hormizd IV (579–590) took the throne. The war with the Byzantines continued to rage intensely but inconclusively until the general Bahram Chobin, dismissed and humiliated by Hormizd, rose in revolt in 589. The following year, Hormizd was overthrown by a palace coup and his son Khosrow II (590–628) placed on the throne. However, this change of ruler failed to placate Bahram, who defeated Khosrow, forcing him to flee to Byzantine territory, and seized the throne for himself as Bahram VI. Khosrow asked the Byzantine Emperor Maurice (582–602) for assistance against Bahram, offering to cede the western Caucasus to the Byzantines. To cement the alliance, Khosrow also married Maurice's daughter Miriam. Under the command of Khosrow and the Byzantine generals Narses and John Mystacon, the new combined Byzantine-Persian army raised a rebellion against Bahram, defeating him at the Battle of Blarathon in 591. When Khosrow was subsequently restored to power he kept his promise, handing over control of western Armenia and Caucasian Iberia. The new peace arrangement allowed the two empires to focus on military matters elsewhere: Khosrow focused on the Sassanid Empire's eastern frontier while Maurice restored Byzantine control of the Balkans. Circa 600, the Hephthalites had been raiding the Sassanid Empire as far as Spahan in central Iran. The Hephthalites issued numerous coins imitating the coinage of Khosrow II. In c. 606/607, Khosrow recalled Smbat IV Bagratuni from Persian Armenia and sent him to Iran to repel the Hephthalites. Smbat, with the aid of a Persian prince named Datoyean, repelled the Hephthalites from Persia, and plundered their domains in eastern Khorasan, where Smbat is said to have killed their king in single combat. After Maurice was overthrown and killed by Phocas (602–610) in 602, however, Khosrow II used the murder of his benefactor as a pretext to begin a new invasion, which benefited from continuing civil war in the Byzantine Empire and met little effective resistance. Khosrow's generals systematically subdued the heavily fortified frontier cities of Byzantine Mesopotamia and Armenia, laying the foundations for unprecedented expansion. The Persians overran Syria and captured Antioch in 611. 
In 613, outside Antioch, the Persian generals Shahrbaraz and Shahin decisively defeated a major counter-attack led in person by the Byzantine emperor Heraclius. Thereafter, the Persian advance continued unchecked. Jerusalem fell in 614, Alexandria in 619, and the rest of Egypt by 621. The Sassanid dream of restoring the Achaemenid boundaries was almost complete, while the Byzantine Empire was on the verge of collapse. This remarkable peak of expansion was paralleled by a blossoming of Persian art, music, and architecture. While successful at its first stage (from 602 to 622), the campaign of Khosrow II had actually exhausted the Persian army and treasuries. In an effort to rebuild the national treasuries, Khosrow overtaxed the population. Thus, while his empire was on the verge of total defeat, Heraclius (610–641) drew on all his diminished and devastated empire's remaining resources, reorganised his armies, and mounted a remarkable, risky counter-offensive. Between 622 and 627, he campaigned against the Persians in Anatolia and the Caucasus, winning a string of victories against Persian forces under Shahrbaraz, Shahin, and Shahraplakan (whose competition to claim the glory of personally defeating the Byzantine emperor contributed to their failure), sacking the great Zoroastrian temple at Ganzak, and securing assistance from the Khazars and Western Turkic Khaganate. In response, Khosrow, in coordination with Avar and Slavic forces, launched a siege on the Byzantine capital of Constantinople in 626. The Sassanids, led by Shahrbaraz, attacked the city on the eastern side of the Bosphorus, while their Avar and Slavic allies invaded from the western side. Attempts to ferry the Persian forces across the Bosphorus to aid their allies (the Sassanid forces being by far the most capable in siege warfare) were blocked by the Byzantine fleet, and the siege ended in failure. In 627–628, Heraclius mounted a winter invasion of Mesopotamia, and, despite the departure of his Khazar allies, defeated a Persian army commanded by Rhahzadh in the Battle of Nineveh. He then marched down the Tigris, devastating the country and sacking Khosrow's palace at Dastagerd. He was prevented from attacking Ctesiphon by the destruction of the bridges on the Nahrawan Canal and conducted further raids before withdrawing up the Diyala into north-western Iran. The impact of Heraclius's victories, the devastation of the richest territories of the Sassanid Empire, and the humiliating destruction of high-profile targets such as Ganzak and Dastagerd fatally undermined Khosrow's prestige and his support among the Persian aristocracy. In early 628, he was overthrown and murdered by his son Kavadh II (628), who immediately brought an end to the war, agreeing to withdraw from all occupied territories. In 629, Heraclius restored the True Cross to Jerusalem in a majestic ceremony. Kavadh died within months, and chaos and civil war followed. Over a period of four years and five successive kings, the Sassanid Empire weakened considerably. The power of the central authority passed into the hands of the generals. It would take several years for a strong king to emerge from a series of coups, and the Sassanids never had time to recover fully. In early 632, Yazdegerd III, a grandson of Khosrow II who had lived in hiding in Estakhr, acceded to the throne. The same year, the first raiders from the Arab tribes, newly united by Islam, arrived in Persian territory.
According to Howard-Johnston, years of warfare had exhausted both the Byzantines and the Persians. The Sassanids were further weakened by economic decline, heavy taxation, religious unrest, rigid social stratification, the increasing power of the provincial landholders, and a rapid turnover of rulers, facilitating the Islamic conquest of Persia. The Sassanids never mounted a truly effective resistance to the pressure applied by the initial Arab armies. Yazdegerd was a boy at the mercy of his advisers and incapable of uniting a vast country crumbling into small feudal kingdoms, even though the Byzantines, under similar pressure from the newly expansive Arabs, were no longer a threat. Caliph Abu Bakr's commander Khalid ibn Walid, once one of Muhammad's chosen companions-in-arms and leader of the Arab army, moved to capture Iraq in a series of lightning battles. After Khalid was redeployed to the Syrian front against the Byzantines in June 634, however, his successor in Iraq failed, and the Muslims were defeated in the Battle of the Bridge in 634. Yet the Arab threat did not stop there, reemerging shortly afterwards through the disciplined armies of Khalid ibn Walid. In 637, a Muslim army under the Caliph Umar ibn al-Khattāb defeated a larger Persian force led by General Rostam Farrokhzad at the plains of al-Qādisiyyah and then advanced on Ctesiphon, which fell after a prolonged siege. Yazdegerd fled eastward, leaving behind him most of the empire's vast treasury; when the Arabs captured Ctesiphon shortly afterward, they seized this powerful financial resource, leaving the Sassanid government strapped for funds. A number of Sassanid governors attempted to combine their forces to throw back the invaders, but the effort was crippled by the lack of a strong central authority, and the governors were defeated at the Battle of Nihawānd. The empire, with its military command structure non-existent, its non-noble troop levies decimated, its financial resources effectively destroyed, and the Asawaran (Azatan) knightly caste destroyed piecemeal, was now utterly helpless in the face of the Arab invaders. Upon hearing of the defeat at Nihawānd, Yazdegerd, along with Farrukhzad and some of the Persian nobles, fled further inland to the eastern province of Khorasan. Yazdegerd was assassinated by a miller in Merv in late 651. His sons, Peroz and Bahram, fled to Tang China. Some of the nobles settled in Central Asia, where they contributed greatly to spreading the Persian culture and language in those regions and to the establishment of the first native Iranian Islamic dynasty, the Samanid dynasty, which sought to revive Sassanid traditions. The abrupt fall of the Sassanid Empire was completed in a period of just five years, and most of its territory was absorbed into the Islamic caliphate; however, many Iranian cities resisted and fought against the invaders several times. Islamic caliphates repeatedly suppressed revolts in cities such as Rey, Isfahan, and Hamadan. The local population was initially under little pressure to convert to Islam, remaining as dhimmi subjects of the Muslim state and paying the jizya tax. In addition, the old Sassanid "land tax" (known in Arabic as kharaj) was also adopted. Caliph Umar is said to have occasionally set up a commission to survey the taxes, to judge if they were more than the land could bear. The Sasanian Empire was the last native Iranian government before the Arab invasion.
In 1501, the Safavids became the first local rulers to take control of all of Iran since the Arabs had ended the Sasanian Empire in the 7th century. A number of later dynasties and noble families are believed to have had ancestors among the Sasanian rulers. Government The Sassanids established an empire roughly within the frontiers achieved by the Parthian Arsacids, with the capital at Ctesiphon in the Asoristan province. In administering this empire, Sassanid rulers took the title of shahanshah (King of Kings), becoming the central overlords, and also assumed guardianship of the sacred fire, the symbol of the national religion. This symbol is explicit on Sassanid coins, where the reigning monarch, with his crown and regalia of office, appears on the obverse, backed by the sacred fire on the reverse. Sassanid queens had the title of banbishnan banbishn (Queen of Queens). On a smaller scale, the territory might also be ruled by a number of petty rulers from a noble family, known as shahrdar, overseen directly by the shahanshah. The districts of the provinces were ruled by a shahrab and a mowbed (chief priest). The mowbed dealt with estates and other legal matters. Sasanian rule was characterized by considerable centralization, ambitious urban planning, agricultural development, and technological improvements. Below the king, a powerful bureaucracy carried out much of the affairs of government; the head of the bureaucracy was the wuzurg framadar (vizier or prime minister). Within this bureaucracy the Zoroastrian priesthood was immensely powerful. Below the emperor, the most powerful men of the Sassanid state were his chief officials: the mowbedan mowbed, the head of the priestly class (magi); the spahbed, the commander-in-chief; the hutukhshbed, the head of the traders' and merchants' syndicate; and the minister of agriculture, the wastaryoshan-salar, who was also head of the farmers. Formally absolute monarchs, the Sasanian rulers usually considered the advice of their ministers. A 10th-century Muslim historian, al-Masudi, praised the "excellent administration of the Sasanian kings, their well-ordered policy, their care for their subjects, and the prosperity of their domains". In normal times, the monarchical office was hereditary, but might be transferred by the king to a younger son; in two instances the supreme power was held by queens. When no direct heir was available, the nobles and prelates chose a ruler, but their choice was restricted to members of the royal family. The Sasanian nobility was a mixture of old Parthian clans, Persian aristocratic families, and noble families from subjected territories. Many new noble families had risen after the dissolution of the Parthian dynasty, while several of the once-dominant Seven Parthian clans remained of high importance. At the court of Ardashir I, the old Arsacid families of the House of Karen and the House of Suren, along with several other families, the Varazes and Andigans, held positions of great honor. Alongside these Iranian and non-Iranian noble families, the kings of Merv, Abarshahr, Kirman, Sakastan, Iberia, and Adiabene, who are mentioned as holding positions of honor amongst the nobles, appeared at the court of the shahanshah. Indeed, the extensive domains of the Surens, Karens, and Varazes had become part of the original Sassanid state as semi-independent states.
Thus, the noble families that attended the court of the Sassanid empire continued to be ruling lines in their own right, although subordinate to the shahanshah. In general, wuzurgan from Iranian families held the most powerful positions in the imperial administration, including governorships of border provinces (marzban). Most of these positions were patrimonial, and many were passed down through a single family for generations. The marzbans of greatest seniority were permitted a silver throne, while marzbans of the most strategic border provinces, such as the Caucasus province, were allowed a golden throne. In military campaigns, the regional marzbans could be regarded as field marshals, while lesser spahbeds could command a field army. Culturally, the Sassanids implemented a system of social stratification. This system was supported by Zoroastrianism, which was established as the state religion. Other religions appear to have been largely tolerated, although this claim has been debated. Sassanid emperors consciously sought to resuscitate Persian traditions and to obliterate Greek cultural influence. The active army of the Sassanid Empire originated with Ardashir I, the first shahanshah of the empire. Ardashir restored the Achaemenid military organizations, retained the Parthian cavalry model, and employed new types of armour and siege warfare techniques. The relationship between priests and warriors was important, because the concept of Ērānshahr had been revived by the priests. Disagreements between the priests and the warriors caused fragmentation within the empire, which contributed to its downfall. The Paygan formed the bulk of the Sassanid infantry, and were often recruited from the peasant population. Each unit was headed by an officer called a "Paygan-salar", meaning "commander of the infantry"; their main tasks were to guard the baggage train, serve as pages to the Asvaran (a higher rank), storm fortification walls, undertake entrenchment projects, and excavate mines. Those serving in the infantry were fitted with shields and lances. To enlarge their army, the Sassanids added soldiers provided by the Medes and the Dailamites to their own forces. The Medes provided the Sassanid army with high-quality javelin throwers, slingers and heavy infantry. Iranian infantry are described by Ammianus Marcellinus as "armed like gladiators" who "obey orders like so many horse-boys". The Dailamites, an Iranian people who lived mainly within Gilan, Iranian Azerbaijan and Mazandaran, also served as infantry. They are reported to have fought with weapons such as daggers, swords and javelins, and were recognized by the Romans for their skill and hardiness in close-quarter combat. One account of the Dailamites recounts their participation in an invasion of Yemen, in which 800 of them were led by the Dailamite officer Vahriz. Vahriz eventually defeated the Arab forces in Yemen and captured its capital Sana'a, making the region a Sasanian vassal until the Arab invasion of Persia. The Sasanian navy was an important constituent of the Sasanian military from the time that Ardashir I conquered the Arab side of the Persian Gulf. Because controlling the Persian Gulf was an economic necessity, the Sasanian navy worked to keep it safe from piracy, prevent Roman encroachment, and keep the Arab tribes from turning hostile. However, many historians believe that the naval force cannot have been a strong one, as the men serving in the navy were drawn from prisoners.
The leader of the navy bore the title of nāvbed. The cavalry used during the Sassanid period consisted of two types of heavy cavalry units: clibanarii and cataphracts. The first cavalry force, composed of elite noblemen trained since youth for military service, was supported by light cavalry, infantry and archers. Mercenaries and tribal peoples of the empire, including the Turks, Kushans, Sarmatians, Khazars, Georgians, and Armenians, were included in these first cavalry units. The second type involved the use of war elephants; deploying elephants as cavalry support was in fact a Sassanid specialty. Unlike the Parthians, the Sassanids developed advanced siege engines. Siege weapons were a useful asset in conflicts with Rome, in which success hinged upon the ability to seize cities and other fortified points; conversely, the Sassanids also developed a number of techniques for defending their own cities from attack. The Sassanid army was much like the preceding Parthian army, although some of the Sassanids' heavy cavalry were equipped with lances, while Parthian armies were heavily equipped with bows. The Roman historian Ammianus Marcellinus's description of Shapur II's clibanarii cavalry manifestly shows how heavily equipped it was, and how only a portion were spear-equipped: All the companies were clad in iron, and all parts of their bodies were covered with thick plates, so fitted that the stiff joints conformed with those of their limbs; and the forms of human faces were so skillfully fitted to their heads, that since their entire body was covered with metal, arrows that fell upon them could lodge only where they could see a little through tiny openings opposite the pupil of the eye, or where through the tip of their nose they were able to get a little breath. Of these, some who were armed with pikes, stood so motionless that you would have thought them held fast by clamps of bronze. Horsemen in the Sassanid cavalry lacked stirrups. Instead, they used a war saddle which had a cantle at the back and two guard clamps which curved across the top of the rider's thighs. This allowed the horsemen to stay in the saddle at all times during the battle, especially during violent encounters. The Byzantine emperor Maurikios also emphasizes in his Strategikon that many of the Sassanid heavy cavalry did not carry spears, relying on their bows as their primary weapons. Conversely, the Taq-e Bostan reliefs and Al-Tabari's famed list of equipment required for dihqan knights include the lance. Maintaining a warrior of the Asawaran (Azatan) knightly caste required the income of a small estate, which the caste received from the throne; in return, its members were the throne's most notable defenders in time of war. Relations with neighbouring powers The Sassanids, like the Parthians, were in constant hostilities with the Roman Empire. Having succeeded the Parthians, they were recognized as one of the leading world powers alongside their neighboring rival, the Byzantine Empire, or Eastern Roman Empire, for a period of more than 400 years. Following the division of the Roman Empire in 395, the Byzantine Empire, with its capital at Constantinople, continued as Persia's principal western enemy, and its main enemy in general. Hostilities between the two empires became more frequent. Like the Roman Empire, the Sassanids were in a constant state of conflict with neighboring kingdoms and nomadic hordes.
Although the threat of nomadic incursions could never be fully resolved, the Sassanids generally dealt much more successfully with these matters than did the Romans, due to their policy of making coordinated campaigns against threatening nomads. The last of the many and frequent wars with the Byzantines, the climactic Byzantine–Sasanian War of 602–628, which included the siege of the Byzantine capital Constantinople, ended with both rivalling sides having drastically exhausted their human and material resources. Furthermore, social conflict within the Empire had considerably weakened it further. Consequently, they were vulnerable to the sudden emergence of the Islamic Rashidun Caliphate, whose forces invaded both empires only a few years after the war. The Muslim forces swiftly conquered the entire Sasanian Empire and in the Byzantine–Arab Wars deprived the Byzantine Empire of its territories in the Levant, the Caucasus, Egypt, and North Africa. Over the following centuries, half the Byzantine Empire and the entire Sasanian Empire came under Muslim rule. In general, over the span of the centuries, in the west, Sassanid territory abutted that of the large and stable Roman state, but to the east, its nearest neighbors were the Kushan Empire and nomadic tribes such as the White Huns. The construction of fortifications such as Tus citadel or the city of Nishapur, which later became a center of learning and trade, also assisted in defending the eastern provinces from attack. In south and central Arabia, Bedouin Arab tribes occasionally raided the Sassanid empire. The Kingdom of Al-Hirah, a Sassanid vassal kingdom, was established to form a buffer zone between the empire's heartland and the Bedouin tribes. The dissolution of the Kingdom of Al-Hirah by Khosrow II in 602 contributed greatly to decisive Sassanid defeats suffered against Bedouin Arabs later in the century. These defeats resulted in a sudden takeover of the Sassanid empire by Bedouin tribes under the Islamic banner. In the north, Khazars and the Western Turkic Khaganate frequently assaulted the northern provinces of the empire. They plundered Media in 634. Shortly thereafter, the Persian army defeated them and drove them out. The Sassanids built numerous fortifications in the Caucasus region to halt these attacks, such as the imposing fortifications built in Derbent (Dagestan, North Caucasus, now a part of Russia) that to a large extent, have remained intact up to this day. On the eastern side of the Caspian Sea, the Sassanians erected the Great Wall of Gorgan, a 200 km-long defensive structure probably aimed to protect the empire from northern peoples, such as the White Huns. In 522, before Khosrow's reign, a group of monophysite Axumites led an attack on the dominant Himyarites of southern Arabia. The local Arab leader was able to resist the attack but appealed to the Sassanians for aid, while the Axumites subsequently turned towards the Byzantines for help. The Axumites sent another force across the Red Sea and this time successfully killed the Arab leader and replaced him with an Axumite man to be king of the region. In 531, Emperor Justinian suggested that the Axumites of Yemen should cut out the Persians from Indian trade by maritime trade with the Indians. The Ethiopians never met this request because an Axumite general named Abraha took control of the Yemenite throne and created an independent nation. After Abraha's death one of his sons, Ma'd-Karib, went into exile while his half-brother took the throne. 
After being denied by Justinian, Ma'd-Karib sought help from Khosrow, who sent a small fleet and army under commander Vahriz to depose the new king of Yemen. After the capital city Sana'a was captured, Ma'd-Karib's son Saif was put on the throne. Justinian was ultimately responsible for the Sassanian maritime presence in Yemen: because he had not provided the Yemenite Arabs with support, Khosrow was able to help Ma'd-Karib and subsequently to establish Yemen as a principality of the Sassanian Empire. Like their predecessors the Parthians, the Sassanids actively sought foreign relations with China, and ambassadors from Persia frequently travelled to China. Chinese documents mention sixteen Sassanid embassies to China from 455 to 555. Commercially, land and sea trade with China was important to both the Sassanids and the Chinese. Large numbers of Sassanid coins have been found in southern China, confirming the existence of maritime trade. On different occasions, Sassanid kings sent their most talented Persian musicians and dancers to the Chinese imperial court at Luoyang during the Jin and Northern Wei dynasties, and to Chang'an during the Sui and Tang dynasties. Both empires benefited from trade along the Silk Road and shared a common interest in preserving and protecting that trade. They cooperated in guarding the trade routes through central Asia, and both built outposts in border areas to keep caravans safe from nomadic tribes and bandits. Politically, there is evidence that on several occasions the Sassanids and the Chinese formed alliances to counter their common enemy, the Hephthalites. Upon the rise of the nomadic Göktürks in Inner Asia, there also seems to have been an alliance between China and the Sassanids to check the Turkic advance. Documents from Mt. Mogh report the presence of a Chinese general in the service of the king of Sogdiana at the time of the Arab invasions. Following the invasion of Iran by Muslim Arabs, Peroz III, son of Yazdegerd III, escaped along with a few Persian nobles and took refuge in the Chinese imperial court. Both Peroz and his son Narsieh (Chinese neh-shie) were given high titles at the Chinese court. On at least two occasions, the last possibly in 670, Chinese troops were sent with Peroz in order to restore him to the Sassanid throne. Narsieh later attained the position of a commander of the Chinese imperial guards, and his descendants lived in China as respected princes. Sassanian refugees who fled the Arab conquests settled in China during the reign of the Chinese emperor Gaozong of Tang. Following the conquest of Iran and neighboring regions, Shapur I extended his authority into the northwest of the Indian subcontinent. The previously autonomous Kushans were obliged to accept his suzerainty. These were the western Kushans, who controlled Afghanistan, while the eastern Kushans were active in India. Although the Kushan empire declined at the end of the 3rd century, to be replaced by the Indian Gupta Empire in the 4th century, it is clear that the Sassanids remained relevant in India's northwest throughout this period. Persia and northwestern India, the latter formerly part of the Kushan realm, engaged in cultural as well as political exchange during this period, as certain Sassanid practices spread into the Kushan territories. In particular, the Kushans were influenced by the Sassanid conception of kingship, which spread through the trade of Sassanid silverware and textiles depicting emperors hunting or dispensing justice.
This cultural interchange did not, however, spread Sassanid religious practices or attitudes to the Kushans. Lower-level cultural interchanges also took place between India and Persia during this period. For example, Persians imported the early form of chess, the chaturanga (Middle Persian: chatrang) from India. In exchange, Persians introduced backgammon (Nēw-Ardašēr) to India. During Khosrow I's reign, many books were brought from India and translated into Middle Persian. Some of these later found their way into the literature of the Islamic world and Arabic literature. A notable example of this was the translation of the Indian Panchatantra by one of Khosrow's ministers, Borzuya. This translation, known as the Kalīlag ud Dimnag, later made its way into the Arabic literature and Europe. The details of Borzuya's legendary journey to India and his daring acquisition of the Panchatantra are written in full detail in Ferdowsi's Shahnameh, which says: In Indian books, Borzuya read that on a mountain in that land there grows a plant which when sprinkled over the dead revives them. Borzuya asked Khosrau I for permission to travel to India to obtain the plant. After a fruitless search, he was led to an ascetic who revealed the secret of the plant to him: The "plant" was word, the "mountain" learning, and the "dead" the ignorant. He told Borzuya of a book, the remedy of ignorance, called the Kalila, which was kept in a treasure chamber. The king of India gave Borzuya permission to read the Kalila, provided that he did not make a copy of it. Borzuya accepted the condition but each day memorized a chapter of the book. When he returned to his room he would record what he had memorized that day, thus creating a copy of the book, which he sent to Iran. In Iran, Bozorgmehr translated the book into Pahlavi and, at Borzuya's request, named the first chapter after him. Society In contrast to Parthian society, the Sassanids renewed emphasis on a charismatic and centralized government. In Sassanid theory, the ideal society could maintain stability and justice, and the necessary instrument for this was a strong monarch. Thus, the Sasanians aimed to be an urban empire, at which they were quite successful. During the late Sasanian period, Mesopotamia had the largest population density in the medieval world. This can be credited to, among other things, the Sasanians founding and re-founding a number of cities, which is talked about in the surviving Middle Persian text Šahrestānīhā ī Ērānšahr (the provincial capitals of Iran). Ardashir I himself built and re-built many cities, which he named after himself, such as Veh-Ardashir in Asoristan, Ardashir-Khwarrah in Pars and Vahman-Ardashir in Meshan. During the Sasanian period, many cities with the name "Iran-khwarrah" were established. This was because Sasanians wanted to revive Avesta ideology. Many of these cities, both new and old, were populated not only by native ethnic groups, such as the Iranians or Syriacs, but also by the deported Roman prisoners of war, such as Goths, Slavs, Latins, and others. Many of these prisoners were experienced workers, who were used to build things such as cities, bridges, and dams. This allowed the Sasanians to become familiar with Roman technology. The impact these foreigners made on the economy was significant, as many of them were Christians, and the spread of the religion accelerated throughout the empire. 
In contrast to the settled peoples of the Sasanian Empire, little is known about its nomadic and unsettled populations. It is known that they were called "Kurds" by the Sasanians, and that they regularly served in the Sasanian military, particularly the Dailamite and Gilani nomads. This way of handling the nomads continued into the Islamic period, when the service of the Dailamites and Gilanis continued unabated. The head of the Sasanian Empire was the shahanshah (king of kings), also simply known as the shah (king). His health and welfare were of high importance—accordingly, the phrase "May you be immortal" was used to reply to him. Sasanian coins from the 6th century onwards depict a moon and sun, which, in the words of the Iranian historian Touraj Daryaee, "suggest that the king was at the center of the world and the sun and moon revolved around him." In effect he was the "king of the four corners of the world", which was an old Mesopotamian idea. The king saw all other rulers, such as the Romans, Turks, and Chinese, as being beneath him. The king wore colorful clothes, makeup, and a heavy crown, and his beard was decorated with gold. The early Sasanian kings considered themselves of divine descent; they called themselves "bay" (divine). When the king went out in public, he was hidden behind a curtain, and had some of his men in front of him, whose duty was to keep the masses away from him and to clear the way. When one came to the king, one was expected to prostrate oneself before him, a practice known as proskynesis. The king's guards were known as the pushtigban. On other occasions, the king was protected by a separate group of palace guards, known as the darigan. Both of these groups were enlisted from royal families of the Sasanian Empire and were under the command of the hazarbed, who was in charge of the king's safety, controlled the entrance to the king's palace, presented visitors to the king, and could be given military commands or employed as a negotiator. The hazarbed was also allowed in some cases to serve as the royal executioner. During Nowruz (the Iranian new year) and Mihragan (Mihr's day), the king would deliver a speech. Sassanid society was immensely complex, with separate systems of social organization governing numerous different groups within the empire. Historians believe society comprised four social classes. At the center of the Sasanian caste system, the shahanshah ruled over all the nobles. The royal princes, petty rulers, great landlords, and priests together constituted a privileged stratum and were identified as wuzurgan, or grandees. This social system appears to have been fairly rigid. The Sasanian caste system outlived the empire, continuing in the early Islamic period. The Sasanian caste system was directly based on Zoroastrian doctrine. In general, mass slavery was never practiced by the Iranians, and in many cases the situation and lives of semi-slaves (prisoners of war) were, in fact, better than those of commoners. In Persia, the term "slave" was also used for debtors who had to use some of their time to serve in a fire-temple. Some of the laws governing the ownership and treatment of slaves can be found in the legal compilation called the Mādayān ī Hazār Dādestān, a collection of rulings by Sasanian judges. Principles that can be inferred from these laws include the following: to free a slave (irrespective of his or her faith) was considered a good deed.
Slaves had some rights including keeping gifts from the owner and at least three days of rest in the month. The most common slaves in the Sasanian Empire were the household servants, who worked in private estates and at the fire-temples. Usage of a woman slave in a home was common, and her master had outright control over her and could even produce children with her if he wanted to. Slaves also received wages and were able to have their own families whether they were female or male. Harming a slave was considered a crime, and not even the king himself was allowed to do it. The master of a slave was allowed to free the person when he wanted to, which, no matter what faith the slave believed in, was considered a good deed. A slave could also be freed if his/her master died. Culture There was a major school, called the Grand School, in the capital. In the beginning, only 50 students were allowed to study at the Grand School. In less than 100 years, enrollment at the Grand School was over 30,000 students. On a lower level, Sasanian society was divided into Azatan (freemen). The Azatan formed a large low-aristocracy of low-level administrators, mostly living on small estates. The Azatan provided the cavalry backbone of the Sasanian army. The Sasanian kings were patrons of letters and philosophy. Khosrow I had the works of Plato and Aristotle, translated into Pahlavi, taught at Gundishapur, and read them himself. During his reign, many historical annals were compiled, of which the sole survivor is the Karnamak-i Artaxshir-i Papakan (Deeds of Ardashir), a mixture of history and romance that served as the basis of the Iranian national epic, the Shahnameh. When Justinian I closed the schools of Athens, seven of their professors went to Persia and found refuge at Khosrow's court. In his treaty of 533 with Justinian, the Sasanian king stipulated that the Greek sages should be allowed to return and be free from persecution.[page needed] Under Khosrow I, the Academy of Gundishapur, which had been founded in the 5th century, became "the greatest intellectual center of the time", drawing students and teachers from every quarter of the known world. Nestorian Christians were received there, and brought Syriac translations of Greek works in medicine and philosophy. The medical lore of India, Persia, Syria and Greece mingled there to produce a flourishing school of therapy.[page needed] The Sasanian period saw an open exchange of ideas with other people, in particular with India and Byzantium. From India, works on medicine, astronomy, mirrors for princes and fables such as the panchatantra were imported and translated. Indian works on astrology were especially prized and much effort was dedicated to their translation. Chess was also imported from India, where it was adopted and underwent further development. From Byzantium came musical instruments, scientific, medical and philosophical works. The decision of emperor Justinian to close down the neo-platonic school in Athens caused many philosophers to travel to Persia and seek work at the academy of Gondishapur. Artistically, the Sasanian period witnessed some of the highest achievements of Iranian civilization. Much of what would later become known as Islamic culture, including architecture and writing, was originally drawn from Persian culture. At its peak, the Sasanian Empire stretched from western Anatolia to northwest India (today Pakistan), but its influence was felt far beyond these political boundaries. 
Sasanian motifs found their way into the art of Central Asia and China, the Byzantine Empire, and even Merovingian France. Islamic art however, was the true heir to Sasanian art, whose concepts it was to assimilate while at the same time instilling fresh life and renewed vigor into it. According to Will Durant: Sasanian art exported its forms and motifs eastward into India, Turkestan and China, westward into Syria, Asia Minor, Constantinople, the Balkans, Egypt and Spain. Probably its influence helped to change the emphasis in Greek art from classic representation to Byzantine ornament, and in Latin Christian art from wooden ceilings to brick or stone vaults and domes and buttressed walls. Sasanian carvings at Taq-e Bostan and Naqsh-e Rustam were colored; so were many features of the palaces; but only traces of such painting remain. The literature, however, makes it clear that the art of painting flourished in Sasanian times. Painting, sculpture, pottery, and other forms of decoration shared their designs with Sasanian textile art. Silks, embroideries, brocades, damasks, tapestries, chair covers, canopies, tents and rugs were woven with patience and masterly skill, and were dyed in warm tints of yellow, blue and green. Great colorful carpets had been an appendage of wealth in the East since Assyrian days. The two dozen Sasanian textiles that have survived are among the most highly valued fabrics in existence. Even in their own day, Sasanian textiles were admired and imitated from Egypt to the Far East; and during the Middle Ages, they were favored for clothing the relics of Christian saints. Studies on Sasanian remains show over 100 types of crowns being worn by Sasanian kings. The various Sasanian crowns demonstrate the cultural, economic, social and historical situation in each period. The crowns also show the character traits of each king in this era. Different symbols and signs on the crowns—the moon, stars, eagle and palm, each illustrate the wearer's religious faith and beliefs. The Sasanian dynasty, like the Achaemenid, originated in the province of Pars. The Sasanians saw themselves as successors of the Achaemenids, after the Hellenistic and Parthian interlude, and believed that it was their destiny to restore the greatness of Persia. When reviving the glories of the Achaemenid past, the Sasanians went beyond imitation. The art of this period reveals an astonishing virility, in certain respects anticipating key features of Islamic art. Sasanian art combined elements of traditional Persian art with Hellenistic elements and influences. The conquest of Persia by Alexander the Great helped introduce Hellenistic art to the East. Though the East accepted the outward form of this art, it never really assimilated its spirit. By the Parthian period, Hellenistic art was being interpreted freely by the peoples of the Near East, and during the Sasanian period, there was reaction against it. Sasanian art revived forms and traditions native to Persia, and in the Islamic period, these reached the shores of the Mediterranean. According to Fergusson: With the accession of the [Sasanians], Persia regained much of that power and stability to which she had been so long a stranger ... The improvement in the fine arts at home indicates returning prosperity, and a degree of security unknown since the fall of the Achaemenidae. Surviving palaces illustrate the splendor in which the Sasanian monarchs lived. 
Examples include palaces at Firuzabad and Bishapur in Pars, and the capital city of Ctesiphon in the Asoristan province (present-day Iraq). In addition to local traditions, Parthian architecture influenced Sasanian architectural characteristics. All are characterized by the barrel-vaulted iwans introduced in the Parthian period. During the Sasanian period, these reached massive proportions, particularly at Ctesiphon. There, the arch of the great vaulted hall, attributed to the reign of Shapur I (241–272), has a span of more than 80 feet (24 m) and reaches a height of 118 feet (36 m). This magnificent structure fascinated architects in the centuries that followed and has been considered one of the most important examples of Persian architecture. Many of the palaces contain an inner audience hall consisting, as at Firuzabad, of a chamber surmounted by a dome. The Persians solved the problem of constructing a circular dome on a square building by employing squinches, or arches built across each corner of the square, thereby converting it into an octagon on which it is simple to place the dome. The dome chamber in the palace of Firuzabad is the earliest surviving example of the use of the squinch, suggesting that this architectural technique was probably invented in Persia. Among the unique characteristics of Sasanian architecture was its distinctive use of space. The Sasanian architect conceived his building in terms of masses and surfaces; hence the use of massive walls of brick decorated with molded or carved stucco. Stucco wall decorations appear at Bishapur, but better examples are preserved at Chal Tarkhan near Rey, which date to the late Sasanian or early Islamic period, and from Ctesiphon and Kish in Mesopotamia. The panels show animal figures set in roundels, human busts, and geometric and floral motifs. Economy Because the majority of the inhabitants were of peasant stock, the Sasanian economy relied on farming and agriculture, with Khuzestan and Iraq being its most important provinces. The Nahravan Canal is one of the greatest examples of Sasanian irrigation systems, and many such works can still be found in Iran. The mountains of the Sasanian state were used for lumbering by the nomads of the region, and the centralized nature of the Sasanian state allowed it to impose taxes on the nomads and inhabitants of the mountains. During the reign of Khosrow I, further land was brought under centralized administration. Two trade routes were used during the Sasanian period: one in the north, the famous Silk Route, and one less prominent route on the southern Sasanian coast. The factories of Susa, Gundeshapur, and Shushtar were famous for their production of silk and rivaled the Chinese factories. The Sasanians showed great toleration to the inhabitants of the countryside, which allowed the latter to stockpile in case of famine. Persian industry under the Sasanians developed from domestic to urban forms. Guilds were numerous. Good roads and bridges, well patrolled, enabled state post and merchant caravans to link Ctesiphon with all provinces; and harbors were built in the Persian Gulf to quicken trade with India. Sasanian merchants ranged far and wide and gradually ousted Romans from the lucrative Indian Ocean trade routes. Recent archaeological discoveries have shown that the Sasanians used special commercial labels on goods as a way of promoting their brands and distinguishing between different qualities.
Khosrow I further extended the already vast trade network. The Sasanian state now tended toward monopolistic control of trade, with luxury goods assuming a far greater role in trade than heretofore, and the great activity in the building of ports, caravanserais, bridges, and the like was linked to trade and urbanization. In the time of Khosrow, the Persians dominated international trade in the Indian Ocean, Central Asia, and South Russia, although competition with the Byzantines was at times intense. Sassanian settlements in Oman and Yemen testify to the importance of trade with India, but the silk trade with China was mainly in the hands of Sasanian vassals and an Iranian people, the Sogdians. In 571, direct control over incense produced in Yemen fell to the Sasanian Empire. The main exports of the Sasanians were silk; woolen and golden textiles; carpets and rugs; hides; and leather and pearls from the Persian Gulf. There were also goods in transit from China (paper, silk) and India (spices), which Sasanian customs imposed taxes upon, and which were re-exported from the Empire to Europe. It was also a time of increased metallurgical production, so Iran earned a reputation as the "armory of Asia". Most of the Sasanian mining centers were at the fringes of the Empire – in Armenia, the Caucasus and, above all, Transoxania. The extraordinary mineral wealth of the Pamir Mountains led to a legend among the Tajiks: when God was creating the world, he tripped over the Pamirs, dropping his jar of minerals, which spread across the region. Religion Under Parthian rule, Zoroastrianism had fragmented into regional variations which also saw the rise of local cult-deities, some from Iranian religious tradition but others drawn from Greek tradition too. Greek paganism and religious ideas had spread and mixed with Zoroastrianism when Alexander the Great had conquered the Persian Empire from Darius III—a process of Greco-Persian religious and cultural synthesisation which had continued into the Parthian era. However, under the Sassanids, an orthodox Zoroastrianism was revived and the religion would undergo numerous and important developments. Sassanid Zoroastrianism developed clear distinctions from the practices laid out in the Avesta, the holy books of Zoroastrianism. Sassanid religious policies contributed to the flourishing of numerous religious reform movements, most importantly those founded by the religious leaders Mani and Mazdak. The relationship between the Sassanid kings and the religions practiced in their empire became complex and varied. For instance, while Shapur I tolerated and encouraged a variety of religions and seems to have been a Zurvanite himself, religious minorities at times were suppressed under later kings, such as Bahram II. Shapur II, on the other hand, tolerated religious groups except Christians, whom he persecuted only in the wake of Constantine's conversion. From the very beginning of Sassanid rule in 224, an orthodox Pars-oriented Zoroastrian tradition would play an important part in influencing and lending legitimization to the state until its collapse in the mid-7th century. After Ardashir I had deposed the last Parthian King, Artabanus IV, he sought out Tansar, a herbad (high priest) of the Iranian Zoroastrians, to aid him in acquiring legitimization for the new dynasty.
This Tansar did by writing to the nominal and vassal kings in different regions of Iran to accept Ardashir I as their new King, most notably in the Letter of Tansar, which was addressed to Gushnasp, the vassal king of Tabarestan. Gushnasp had accused Ardashir I of having forsaken tradition by usurping the throne, and that while his actions "may have been good for the World" they were "bad for the faith". Tansar refuted these charges in his letter to Gushnasp by proclaiming that not all of the old ways had been good, and that Ardashir was more virtuous than his predecessors. The Letter of Tansar included some attacks on the religious practices and orientation of the Parthians, who did not follow an orthodox Zoroastrian tradition but rather a heterodox one, and so attempted to justify Ardashir's rebellion against them by arguing that Zoroastrianism had 'decayed' after Alexander's invasion, a decay which had continued under the Parthians and so needed to be 'restored'. Tansar would later help to oversee the formation of a single 'Zoroastrian church' under the control of the Persian magi, alongside the establishment of a single set of Avestan texts, which he himself approved and authorised. Kartir, a very powerful and influential Persian cleric, served under several Sassanid Kings and actively campaigned for the establishment of a Pars-centred Zoroastrian orthodoxy across the Sassanid Empire. His power and influence grew so much that he became the only 'commoner' to later be allowed to have his own rock inscriptions carved in the royal fashion (at Sar Mashhad, Naqsh-e Rostam, Ka'ba-ye Zartosht and Naqsh-e Rajab). Under Shapur I, Kartir was made the 'absolute authority' over the 'order of priests' at the Sassanid court and throughout the empire's regions too, with the implication that all regional Zoroastrian clergies would now for the first time be subordinated to the Persian Zoroastrian clerics of Pars. To some extent Kartir was an iconoclast and took it upon himself to help establish numerous Bahram fires throughout Iran in the place of the 'bagins / ayazans' (monuments and temples containing images and idols of cult-deities) that had proliferated during the Parthian era. In expressing his doctrinal orthodoxy, Kartir also encouraged an obscure Zoroastrian concept known as khvedodah among the common-folk (marriage within the family; between siblings, cousins). At various stages during his long career at court, Kartir also oversaw the periodic persecution of the non-Zoroastrians in Iran, and secured the execution of the prophet Mani during the reign of Bahram I. During the reign of Hormizd I (the predecessor and brother of Bahram I) Kartir was awarded the new Zoroastrian title of mobad—a clerical title that was to be considered higher than that of the eastern-Iranian (Parthian) title of herbad. The Persians had long known of the Egyptian calendar, with its 365 days divided into 12 months. However, the traditional Zoroastrian calendar had 12 months of 30 days each. During the reign of Ardashir I, an effort was made to introduce a more accurate Zoroastrian calendar for the year, so 5 extra days were added to it. These 5 extra days were named the Gatha days and had a practical as well as religious use. However, they were still kept apart from the 'religious year', so as not to disturb the long-held observances of the older Zoroastrian calendar. 
Some difficulties arose with the introduction of the first calendar reform, particularly the pushing forward of important Zoroastrian festivals such as Hamaspat-maedaya and Nowruz on the calendar year by year. This confusion apparently caused much distress among ordinary people, and while the Sassanids tried to enforce the observance of these great celebrations on the new official dates, much of the populace continued to observe them on the older, traditional dates, and so parallel celebrations for Nowruz and other Zoroastrian celebrations would often occur within days of each other, in defiance of the new official calendar dates, causing much confusion and friction between the laity and the ruling class. A compromise on this by the Sassanids was later introduced, by linking the parallel celebrations as a 6-day celebration/feast. This was done for all except Nowruz. A further problem occurred as Nowruz had shifted in position during this period from the spring equinox to autumn, although this inconsistency with the original spring-equinox date for Nowruz had possibly occurred during the Parthian period too. Further calendar reforms occurred during the later Sassanid era. Ever since the reforms under Ardashir I there had been no intercalation. Thus with a quarter-day being lost each year, the Zoroastrian holy year had slowly slipped backwards, with Nowruz eventually ending up in July. A great council was therefore convened and it was decided that Nowruz be moved back to the original position it had during the Achaemenid period—back to spring. This change probably took place during the reign of Kavad I in the early 6th century. Much emphasis seems to have been placed during this period on the importance of spring and on its connection with the resurrection and Frashegerd. Reflecting the regional rivalry and bias the Sassanids are believed to have held against their Parthian predecessors, it was probably during the Sassanid era that the two great fires in Pars and Media—the Adur Farnbag and Adur Gushnasp respectively—were promoted to rival, and even eclipse, the sacred fire in Parthia, the Adur Burzen-Mehr. The Adur Burzen-Mehr, linked (in legend) with Zoroaster and Vishtaspa (the first Zoroastrian King), was too holy for the Persian magi to end veneration of it completely. It was therefore during the Sassanid era that the three Great Fires of the Zoroastrian world were given specific associations. The Adur Farnbag in Pars became associated with the magi, Adur Gushnasp in Media with warriors, and Adur Burzen-Mehr in Parthia with the lowest estate, farmers and herdsmen. The Adur Gushnasp eventually became, by custom, a place of pilgrimage by foot for newly enthroned Kings after their coronation. It is likely that, during the Sassanid era, these three Great Fires became central places for pilgrimage among Zoroastrians. The early Sassanids ruled against the use of cult images in worship, and so statues and idols were removed from many temples and, where possible, sacred fires were installed instead. This policy extended even to the 'non-Iran' regions of the empire during some periods. Hormizd I allegedly destroyed statues erected for the dead in Armenia. However, only cult-statues were removed. The Sassanids continued to use images to represent the deities of Zoroastrianism, including that of Ahura Mazda, in the tradition that was established during the Seleucid era. In the early Sassanid period royal inscriptions often consisted of Parthian, Middle Persian and Greek. 
However, the last time Parthian was used for a royal inscription came during the reign of Narseh, son of Shapur I. It is likely therefore that soon after this, the Sassanids made the decision to impose Persian as the sole official language within Iran, and forbade the use of written Parthian. This had important consequences for Zoroastrianism, given that all secondary literature, including the Zand, was then recorded only in Middle Persian, having a profound impact in orienting Zoroastrianism towards the influence of the Pars region, the homeland of the Sassanids. Some scholars of Zoroastrianism such as Mary Boyce have speculated that it is possible that the yasna service was lengthened during the Sassanid era "to increase its impressiveness". This appears to have been done by joining the Gathic Staota Yesnya with the haoma ceremony. Furthermore, it is believed that another longer service developed, known as the Visperad, which derived from the extended yasna. This was developed for the celebration of the seven holy days of obligation (the Gahambars plus Nowruz) and was dedicated to Ahura Mazda. While the very earliest Zoroastrians eschewed writing as a form of demonic practice, the Middle Persian Zand, along with much secondary Zoroastrian literature, was recorded in writing during the Sassanid era for the first time. Many of these Zoroastrian texts were original works from the Sassanid period. Perhaps the most important of these works was the Bundahishn—the mythical Zoroastrian story of Creation. Other older works, some from remote antiquity, were possibly translated from different Iranian languages into Middle Persian during this period. For example, two works, the Drakht-i Asurig (Assyrian Tree) and Ayadgar-i Zareran (Exploits of Zarter) were probably translated from Parthian originals. Of great importance for Zoroastrianism was the creation of the Avestan alphabet under the Sassanids, which enabled the accurate rendering of the Avestan texts in written form (including in its original language/phonology) for the first time. This alphabet was based on the Pahlavi script, a purely consonantal writing system. However, unlike that script, which was not well adapted for even recording spoken Middle Persian, the Avestan alphabet was a fully phonetic alphabet using 46 letters; one for each sound of Avestan. It was, therefore, specifically designed to record Avestan in written form in exactly the way the language actually sounded. As a result, the Persian magi were finally able to faithfully record all surviving ancient Avestan texts in written form. As a result of this development, the Sasanian Avesta was then compiled into 21 nasks (divisions) to correspond with the 21 words of the Ahunavar manthra. An important literary text, the Khwaday-Namag (Book of Kings), was composed during the Sasanian era. This text is the basis of the later Shahnameh of Ferdowsi. Another important Zoroastrian text from the Sasanian period includes the Dadestan-e Menog-e Khrad (Judgments of the Spirit of Wisdom). Christians in the Sasanian Empire belonged mainly to the Nestorian Church (Church of the East) and the Jacobite Church (Syriac Orthodox Church). Although these churches originally maintained ties with Christian churches in the Roman Empire, they were quite different from them: the liturgical language of the Nestorian and Jacobite Churches was Syriac rather than Greek. 
Another reason for a separation between Eastern and Western Christianity was strong pressure from the Sasanian authorities to sever connections with Rome, since the Sasanian Empire was often at war with the Roman Empire. Christianity was recognized by Yazdegerd I in 409 as an allowable faith within the Sasanian Empire. The major break with mainstream Christianity came in 431, due to the pronouncements of the First Council of Ephesus. The Council condemned Nestorius, the patriarch of Constantinople, for teaching a view of Christology in accordance with which he was reluctant to call Mary, mother of Jesus, "Theotokos", or Mother of God. While the teaching of the Council of Ephesus was accepted within the Roman Empire, the Sasanian church disagreed with the condemnation of Nestorius' teachings. When Nestorius was deposed as patriarch, a number of his followers fled to the Sasanian Empire. Persian emperors used this opportunity to strengthen Nestorius' position within the Sasanian church (which made up the vast majority of the Christians in the predominantly Zoroastrian Persian Empire) by eliminating the most important pro-Roman clergymen in Persia and making sure that their places were taken by Nestorians. This was to assure that these Christians would be loyal to the Persian Empire, and not to the Roman. Most of the Christians in the Sasanian empire lived on the western edge of the empire, predominantly in Mesopotamia, but there were also important extant communities in the more northern territories, namely Caucasian Albania, Lazica, Iberia, and the Persian part of Armenia. Other important communities were to be found on the island of Tylos (present day Bahrain), the southern coast of the Persian Gulf, and the area of the Arabian kingdom of Lakhm. Some of these areas were the earliest to be Christianized; the kingdom of Armenia became the first independent Christian state in the world in 301. While a number of Assyrian territories had become almost fully Christianized even earlier, during the 3rd century, they never became independent nations. Recent excavations have discovered Buddhist, Hindu and Jewish religious sites in the empire. Buddhism rivalled Zoroastrian influence in Bactria and Margiana, in the far easternmost territories. A very large Jewish community flourished under Sasanian rule, with thriving centers at Isfahan, Babylon and Khorasan, and with its own semiautonomous Exilarchate leadership based in Mesopotamia. Jewish communities suffered only occasional persecution. They enjoyed relative freedom of religion and were granted privileges denied to other religious minorities. Shapur I (Shabur Malka in Aramaic) was a particular friend to the Jews. His friendship with Shmuel produced many advantages for the Jewish community. Language During the early Sasanian period, Middle Persian, along with Koine Greek and Parthian, appeared in the inscriptions of the early Sasanian kings. However, by the time Narseh (r. 293–302) was ruling, Greek was no longer in use, perhaps due to the disappearance of Greek or the efforts of the anti-Hellenic Zoroastrian clergy to remove it once and for all. This was probably also because Greek was commonplace among the Romans/Byzantines, the rivals of the Sasanians. Parthian soon disappeared as an administrative language too, but continued to be spoken and written in the eastern part of the Sasanian Empire, the homeland of the Parthians.
Furthermore, many of the Parthian aristocrats who had entered Sasanian service after the fall of the Parthian Empire still spoke Parthian, such as the seven Parthian clans, who possessed much power within the empire. The Sasanian Empire appears to have stopped using the Parthian language in its official inscriptions during the reign of Narseh. Aramaic, as in the Achaemenid Empire, was widely used in the Sasanian Empire (from Antioch to Mesopotamia), although Imperial Aramaic began to be replaced by Middle Persian as the administrative language. Although Middle Persian was the native language of the Sasanians, it was only a minority language in the vast Sasanian Empire; it formed the majority only in Pars, while it was widespread around Media and its surrounding regions. However, there were several different Persian dialects during that time. Besides Persian, the unattested predecessor of Adhari, along with one of its dialects, Tati, was spoken in Adurbadagan (Azerbaijan). Unwritten Pre-Daylamite and probably Proto-Caspian, which later became Gilaki in Gilan and Mazandarani (also known as Tabari) in Tabaristan, were spoken in roughly the same regions. Furthermore, some other languages and dialects were spoken in the two regions. In the Sasanian territories in the Caucasus, numerous languages were spoken, including Old Georgian, various Kartvelian languages (notably in Lazica), Middle Persian, Old Armenian, Caucasian Albanian, Scythian, Koine Greek, and others. In Khuzestan, several languages were spoken: Persian in the north and east, while Eastern Middle Aramaic was spoken in the rest of the province. Furthermore, late Neo-Elamite may also have been spoken in the province, but there are no references explicitly naming the language. In Meshan, Strabo divided the Semitic population of the province into "Chaldeans" (Aramaic-speakers) and "Mesenian Arabs". Nomadic Arabs along with Nabataean and Palmyrene merchants are believed to have added to the population as well. Iranians had also begun to settle in the province, along with the Zutt, who had been deported from India. Other Indian groups such as the Malays may also have been deported to Meshan, either as captives or as recruited sailors. In Asoristan, the majority of the people were Aramaic-speaking Nestorian Christians, whose speech notably included Middle Syriac, while the Persians, Jews and Arabs formed a minority in the province. Due to invasions by the Scythians and their sub-group, the Alans, into Atropatene, Armenia, and other places in the Caucasus, these places gained a larger, though still small, Iranian population. Parthian was spoken in Khorasan along with other Iranian dialects and languages, while the Sogdian, Bactrian and Khwarazmian languages were spoken further east in places which were not always controlled by the Sasanians. Further south in Sakastan, which saw an influx of Scythians during the Parthian period and much later became the home of Sistanian Persian, an otherwise unknown Middle Southwestern Iranian language was spoken, if it was not simply Middle Persian.[clarification needed] Kirman was populated by an Iranian group which closely resembled the Persians, while farther to the east, in Paratan, Turan and Makran, non-Iranian languages and an unknown Middle Northwestern Iranian language were spoken. In major cities such as Gundeshapur and Ctesiphon, Latin, Greek and Syriac were spoken by Roman/Byzantine prisoners of war. 
Furthermore, Slavic and Germanic were also spoken in the Sasanian Empire, once again owing to the capture of Roman soldiers, though their numbers must have been negligible. Semitic languages including Himyaritic and Sabaean were spoken in Yemen. Legacy and importance The influence of the Sasanian Empire continued long after it fell. The empire had achieved a Persian renaissance that would become a driving force behind the civilization of the newly established religion of Islam. In modern Iran and the regions of the Iranosphere, the Sasanian period is regarded as one of the high points of Iranian civilization. Sasanian culture and military structure had a significant influence on Roman civilization. The structure and character of the Roman army were affected by the methods of Persian warfare. In a modified form, the Roman Imperial autocracy imitated the royal ceremonies of the Sasanian court at Ctesiphon, and those in turn had an influence on the ceremonial traditions of the courts of medieval and modern Europe. The origin of the formalities of European diplomacy is attributed to the diplomatic relations between the Persian governments and the Roman Empire. Important developments in Jewish history are associated with the Sasanian Empire. The Babylonian Talmud was composed between the third and sixth centuries in Sasanian Persia, and major Jewish academies of learning were established in Sura and Pumbedita, becoming cornerstones of Jewish scholarship. Several members of the imperial family, such as Ifra Hormizd, the queen mother of Shapur II, and Queen Shushandukht, the Jewish wife of Yazdegerd I, significantly contributed to the close relations between the Jews of the empire and the government in Ctesiphon. The collapse of the Sasanian Empire led to Islam slowly replacing Zoroastrianism as the primary religion of Iran. A large number of Zoroastrians chose to emigrate to escape Islamic persecution. According to the Qissa-i Sanjan, one group of those refugees landed in what is now Gujarat, India, where they were allowed greater freedom to observe their customs and preserve their faith. The descendants of those Zoroastrians would play a small but significant role in the development of India. Today there are over 70,000 Zoroastrians in India. The Zoroastrians still use a variant of the religious calendar instituted under the Sasanians. That calendar still marks the number of years since the accession of Yazdegerd III.[f] Chronology See also Notes References Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/The_Times_of_Israel] | [TOKENS: 1536] |
Contents The Times of Israel The Times of Israel (ToI) is an Israeli multi-language online newspaper that was launched in 2012 and has since become the largest English-language Jewish and Israeli news source by audience size. It was co-founded by Israeli journalist David Horovitz, who is also the founding editor, and American billionaire investor Seth Klarman. Based in Jerusalem, it "documents developments in Israel, the Middle East and around the Jewish world." Along with its original English site, The Times of Israel publishes in Hebrew (via its own edition, Zman Yisrael), Arabic, French, and Persian. In addition to publishing news reports and analysis, the website hosts a multi-author blog platform. In February 2014, two years after its launch, The Times of Israel claimed a readership of two million. In 2017, readership increased to 3.5 million unique monthly users. By 2021, the paper had on average over nine million unique users each month and over 35 million monthly pageviews, according to its press kit, while the paper's blog platform had 9,000 active bloggers. In 2025, the press kit listed an audience of 8 million unique monthly users, comprising 25 million monthly visits and 75 million monthly pageviews. Website ranking company Similarweb listed the site's traffic as 45 million visits in June 2025, with more than half the traffic originating in the United States; the paper also stations liaison journalists in New York City. History The Times of Israel was launched in February 2012. Its co-founders are journalist David Horovitz and American billionaire Seth Klarman, founder of the Baupost Group and chairman of The David Project. Klarman is the chairman of the website. Several Times of Israel editors had previously worked for the Haaretz English edition, including Joshua Davidovich and Raphael Ahren, and former Haaretz Arab affairs correspondent Avi Issacharoff – co-creator of the popular Israeli television series Fauda – joined as its Middle East analyst. Amanda Borschel-Dan, who was the Magazine Editor of The Jerusalem Post, is currently The Times of Israel's Deputy Editor, responsible for the Jewish world and archaeology. She also hosts the paper's weekly podcast. The Times of Israel launched its Arabic edition, edited by Suha Halifa, on 4 February 2014; its French edition, edited by Stephanie Bitan, on 25 February 2014; and its Persian edition, edited by Avi Davidi, on 7 October 2015. It launched its Hebrew site, Zman Yisrael, on 1 May 2019, edited by Biranit Goren. Both the Arabic and French editions combine translations of English content with original material in their respective languages, and also host a blog platform. In announcing the Arabic edition, Horovitz suggested that The Times of Israel may have created the first Arabic blog platform that "draw[s] articles from across the spectrum of opinion. We're inviting those of our Arabic readers with something of value that they want to say to blog on our pages, respecting the parameters of legitimate debate, joining our marketplace of ideas." "[T]o avoid the kind of anonymous comments that can reduce discussion to toxic lows", comments on news articles and features in all of the site's editions can only be posted by readers identified through their Facebook profiles or equivalent. In February 2014, two years after its launch, The Times of Israel claimed a readership of 2 million. In 2017, readership increased to 3.5 million. By 2021, the paper had on average over 9 million unique users each month and over 35 million monthly page views. 
It also maintains a blog platform, on which some 9,000 bloggers post. In November 2023, the site saw web visits increase 604% year-on-year to 64.2 million and entered the Press Gazette's top-50 ranking for the first time in 42nd place, according to the digital intelligence platform Similarweb. The increase was likely linked to greater demand for news about the Middle East following the outbreak of the Gaza war on 7 October 2023. Since 2016, The Times of Israel has hosted the websites of Jewish newspapers in several countries, known as "local partners". In March 2016, it began hosting New York's The Jewish Week. It also hosts Britain's Jewish News, the New Jersey Jewish Standard, The Atlanta Jewish Times, and Pittsburgh Jewish Chronicle. In October 2019, The Australian Jewish News became the seventh local partner. On 2 November 2017, hackers in Turkey took down the website of The Times of Israel for three hours, replacing the homepage with anti-Israel propaganda. Responding to the attack, Horovitz said: "We constantly work to improve security on the site, which is subjected to relentless attacks by hackers. How unfortunate, and how badly it reflects on them that the hackers seek to prevent people from reading responsible, independent journalism on Israel, the Middle East and the Jewish world." In 2020, Reuters reported that The Times of Israel, along with The Jerusalem Post, Algemeiner, and Arutz Sheva, published op-eds sent to them by someone using a falsified identity. The op-eds were removed as soon as the problem was discovered. Opinion editor Miriam Herschlag said that she regretted the scam because it distorted the public discourse and might lead to "barriers that prevent new voices from being heard". Editorial orientation Most sources describe The Times of Israel as "centrist". According to editor David Horovitz, The Times of Israel is intended to be independent, without any political leanings. Horovitz said in 2012: "We are independent; we're not attached or affiliated with any political party." Third-party blogging The Times of Israel has offered a third-party blogging platform since 2012 which allows writers who are not affiliated with the news site to publish online. The articles on this platform are clearly marked as such, and Times of Israel staff does not oversee or edit the content from outside users published on the blogging platform. The Times of Israel has at times removed content that has violated the site's policies. This platform has occasionally generated controversy for the newspaper when inflammatory blog pieces had to be removed. These pieces written by third-party users have sometimes been confused for or misrepresented as the endorsed original work of The Times of Israel, leading to criticism of the newspaper itself. As of 2013, it was estimated that The Times of Israel hosted more than 1,500 bloggers. In May 2023, Jeffrey Camras wrote a blog in which he stated "Palestine must be obliterated" because its existence is "an affront to society, morality, humanity... It represents lies and antisemitism, oppression and terror. Nothing more." He also claimed that Palestinian nationality was "a lie". Genocide scholar Raz Segal cited this as an example of genocidal rhetoric in the Israeli discourse. The blog was subsequently removed. 
Additional media In addition to written journalism, The Times of Israel produces and publishes three podcasts; it also produces video content: Notable writers Competition The Times of Israel competes for readership with The Jerusalem Post, Arutz Sheva's Israel National News, Haaretz daily English edition, Israel Hayom, and The Forward. See also References External links Media related to The Times of Israel at Wikimedia Commons |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-AutoNT-73-154] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing (a brief sketch follows at the end of this passage). As of 2026[update], the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which bases its rankings on searches across 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language (itself inspired by SETL) that would be capable of exception handling and of interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, introducing features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. 
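The optional static typing mentioned above can be illustrated with a short sketch; the function names and values are hypothetical, and the annotations are checked by external tools such as mypy rather than enforced at run time.

from typing import Optional

def greet(name: str, excited: bool = False) -> str:
    # The annotations document the expected types; Python itself does not enforce them.
    suffix = "!" if excited else "."
    return f"Hello, {name}{suffix}"

def find_index(items: list[str], target: str) -> Optional[int]:
    # Return the index of target in items, or None if it is absent.
    for i, item in enumerate(items):
        if item == target:
            return i
    return None

print(greet("world", excited=True))      # Hello, world!
print(find_index(["a", "b", "c"], "b"))  # 1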
As of January 2026[update], Python 3.14.3 is the latest stable release. All older 3.x branches received security updates, down to Python 3.9.24 and then 3.9.25, the final release in the 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has had an alpha release, and an official downloadable executable of Python 3.14 is available for Android. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as "Beautiful is better than ugly" and "Readability counts". However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features has been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Van Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use (a brief sketch follows this passage). Alex Martelli, a Fellow at the Python Software Foundation and a Python book author, wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. 
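As a minimal sketch of two points made above—the several ways to format a string literal, and the functional tools in the standard library—consider the following; the variable names and values are illustrative only.

# Three common ways to build the same formatted string.
name, score = "Ada", 95.5
s1 = "%s scored %.1f" % (name, score)        # printf-style formatting
s2 = "{} scored {:.1f}".format(name, score)  # str.format
s3 = f"{name} scored {score:.1f}"            # f-string (Python 3.6+)
assert s1 == s2 == s3

# Functional-style tools: filter/map, a list comprehension, and functools.reduce
# (reduce was moved out of the built-ins in Python 3).
from functools import reduce
numbers = [1, 2, 3, 4, 5]
evens = list(filter(lambda n: n % 2 == 0, numbers))  # [2, 4]
squares = [n * n for n in numbers]                   # [1, 4, 9, 16, 25]
total = reduce(lambda a, b: a + b, numbers, 0)       # 15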
In keeping with that priority, they reject patches to non-critical parts of the CPython reference implementation that would offer speed increases which do not justify the cost in clarity and readability.[failed verification] Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. It is also possible to transpile Python to other languages, but this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or compiles only a restricted subset of Python (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way, but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the following: The assignment statement (=) binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing—in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function; and from version 3.3, data can be passed through multiple stack levels (a brief sketch follows this passage). Python's expressions include the following: In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. 
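The generator behaviour described above—data sent back into a generator since Python 2.5, and delegation across stack levels with yield from since 3.3—can be sketched as follows; the running_total coroutine is a hypothetical example, not taken from the article.

def running_total():
    # 'yield' both produces the current total and receives the next value sent in.
    total = 0
    while True:
        value = yield total
        total += value

acc = running_total()
next(acc)            # prime the generator: run it up to the first yield
print(acc.send(5))   # 5
print(acc.send(7))   # 12

def inner():
    yield 1
    yield 2

def outer():
    # 'yield from' delegates to a sub-generator, passing data and control
    # through multiple stack levels transparently (Python 3.3+).
    yield 0
    yield from inner()
    yield 3

print(list(outer()))  # [0, 1, 2, 3]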
This distinction between statements and expressions leads to duplicating some functionality; for example: A statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a typing module that provides several type names for use in annotations. Also, mypy supports a Python compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Python also offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, as well as the matrix-multiplication operator @. These operators work as in traditional mathematics; with the same precedence rules, the infix operators + and - can also be unary, to represent positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result must lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round to even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. 
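The division, modulo and rounding behaviour just described can be checked directly; the operand values below are chosen only for illustration.

# '/' is true division and yields a float; '//' is floor division.
assert 7 / 2 == 3.5
assert 7 // 2 == 3
assert -7 // 2 == -4        # rounds toward negative infinity, not toward zero
assert 4 % -3 == -2         # the remainder takes the sign of the divisor

# The identity b*(a//b) + a%b == a holds for negative operands as well.
a, b = 7, -3
assert b * (a // b) + a % b == a

# Python 3 breaks ties by rounding to the nearest even integer ("banker's rounding").
assert round(1.5) == 2 and round(2.5) == 2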
As an example of such chaining, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters; an example of a function that prints its inputs appears in the sketch after this passage. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples A "Hello, World!" program and a program to calculate the factorial of a non-negative integer are likewise reconstructed in that sketch. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications—for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333—but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025,[update] the Python Package Index (PyPI), the official repository for third-party Python software, contains over 614,339 packages. Development environments Most[which?] Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners.[citation needed] Other shells, including IDLE and IPython, add capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code; there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions—e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. 
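The code samples referred to in the passage above (a function that prints its inputs, a default parameter value, the "Hello, World!" program, and the factorial program) were not preserved in this extract; the following is a plausible reconstruction in the same spirit, not necessarily the article's exact listings.

def show(item, greeting="received"):
    # 'greeting' has a default value supplied in the function header,
    # used when no corresponding argument is passed at call time.
    print(greeting, item)

show("spam")           # received spam
show("eggs", "got")    # got eggs

# "Hello, World!" program:
print("Hello, World!")

# Factorial of a non-negative integer:
def factorial(n):
    if n < 0:
        raise ValueError("n must be non-negative")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial(5))  # 120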
CPython is available for many platforms, including Windows and most modern Unix-like systems, among them macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there is unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading; many outdated platforms have been dropped, so far fewer operating systems are supported now than in the past. All alternative implementations have at least slightly different semantics. For example, an alternative may include unordered dictionaries, in contrast to other current Python versions. As another example in the larger Python ecosystem, PyPy does not support the full C Python API. Creating an executable with Python is often done by bundling an entire Python interpreter into the executable, which causes binary sizes to be massive for small programs, yet there exist implementations that are capable of truly compiling Python. Alternative implementations include the following: Stackless Python is a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, where the source language is unrestricted Python, a subset of Python, or a language similar to Python; there are also specialized compilers, as well as older projects and compilers not designed for use with Python 3.x and related syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language; these include a range of strategies and tools. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases have three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases. 
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". Languages influenced by Python See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-119] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha have an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets and as working animals for transportation, and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. 
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. 
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms. 
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing an external phylogeny whose other branches are Holomycota (including the fungi), Ichthyosporea, Pluriformea, and Filasterea; uncertain relationships are indicated with dashed lines in the original cladogram. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with a cladogram for the sponge-sister view that they supported, branching in the order Porifera, then Ctenophora, Placozoa, Cnidaria and Bilateria (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, branching in the order Ctenophora, then Porifera, Placozoa, Cnidaria and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research. 
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria includes the Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs. 
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food, fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. 
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since the first vaccines were developed in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Wikipedia:Spoken_articles] | [TOKENS: 231] |
Contents Wikipedia:Spoken articles This page lists recordings of Wikipedia articles being read aloud, and the year each recording was made. Articles under each subject heading are listed alphabetically (by surname for people). For help playing Ogg audio, see Help:Media. To request an article to be spoken, see Category:Spoken Wikipedia requests. For all other information, see the WikiProject Spoken Wikipedia page. Spoken articles marked with a featured-article icon were featured articles at the time of recording; similarly, spoken articles marked with a good-article icon were good articles at the time of recording. There are 1,907 spoken articles in English. Art, architecture and archaeology Biology Business, economics, and finance Chemistry and mineralogy Computing Culture and society Education Food and drink Geography and places Geology and geophysics Health and medicine History Language and linguistics Law and crime Literature and theatre Mathematics Media Music Philosophy Physics and astronomy Politics and government Psychology Religion, mysticism, and mythology Royalty, nobility and chivalry Sport and recreation Technology Transport Video gaming War Wikipedia–related See also External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Business_Insider] | [TOKENS: 1670] |
Contents Business Insider Business Insider (stylized in all caps: BUSINESS INSIDER; known from 2021 to 2023 as INSIDER) is a New York City–based multinational financial and business news website founded in 2007. Since 2015, a majority stake in Business Insider's parent company Insider Inc. has been owned by the German publishing house Axel Springer. It operates several international editions, including one in the United Kingdom. Insider publishes original reporting and aggregates material from other outlets. As of 2011,[update] it maintained a liberal policy on the use of anonymous sources. It has also published native advertising and granted sponsors editorial control of its content. The outlet has been nominated for several awards, but has also been criticized for using factually incorrect clickbait headlines to attract viewership. In 2015, Axel Springer SE acquired an 88 percent stake in Insider Inc. for $343 million (€306 million), implying a total valuation of $442 million. From February 2021 to November 2023, the brand was named simply Insider while it published general news and lifestyle content, before its name was reverted. In 2023, Business Insider shifted its organizational model, adding multiple artificial intelligence (AI) products in 2024, and reducing its staff by nearly 40% between April 2023 and May 2025. History Business Insider was launched in 2007 and is based in Manhattan. Founded by DoubleClick's former CEO Kevin P. Ryan, Dwight Merriman, and Henry Blodget, the site began as a consolidation of industry vertical blogs, the first of them being Silicon Alley Insider (launched May 16, 2007) and Clusterstock (launched March 20, 2008). Gordon Crovitz, former publisher of the Wall Street Journal, was an early investor. In addition to providing and analyzing business news, the site aggregates news stories on various subjects. It started a UK edition in November 2014, and a Singapore bureau in September 2020. BI's parent company is Insider Inc. After Axel Springer SE purchased Business Insider in 2015, a substantial portion of its staff left the company. According to a CNN report, some staff who exited complained that "traffic took precedence over enterprise reporting". In 2017, Business Insider launched BI Prime, a subscription service which placed some of its articles behind a paywall. In 2018, staff members were asked to sign a confidentiality agreement that included a nondisparagement clause requiring them not to criticize the site during or after their employment. Early in 2020, CEO Henry Blodget convened a meeting in which he announced plans for the website to reach 1 million subscribers, 1 billion unique visitors per month, and over 1,000 newsroom employees. The parent companies of Business Insider and eMarketer merged in 2020 in connection with the proposed purchase of Axel Springer by KKR, an American private equity firm. In October 2020, BI's parent company purchased a majority position in Morning Brew, a newsletter. In 2022, Insider won the Pulitzer Prize for Illustrated Reporting and Commentary, its first ever Pulitzer Prize, for its illustrated report "How I escaped a Chinese internment camp". The piece, composed as a series of comics that told the story of one woman's experience escaping China's persecution of Uyghurs, was created by illustrator Fahmida Azim alongside art director Anthony Del Col, writer Josh Adams, and editor Walt Hickey. Business Insider laid off 10% of its employees in April 2023. 
In 2024, after hiring Jamie Heller, a former Wall Street Journal editor, to serve as its editor-in-chief, the company laid off about 8% of the website's staff. In May 2025, another 21% of staff were laid off; CEO Barbara Peng stated that the website had launched multiple AI-driven products during the previous year and was "going all-in on AI", and indicated that "significant organizational changes" initiated in 2023 were underway. More than 70% of employees were using Enterprise ChatGPT regularly by May 2025, with 100% being the goal. Finances Business Insider first reported a profit in the fourth quarter of 2010. As of 2011[update], it had 45 full-time employees. Its target audience at the time was limited to "investors and financial professionals". In June 2012, it had 5.4 million unique visitors. As of 2013[update], Jeff Bezos was a Business Insider investor; his investment company Bezos Expeditions held approximately 3 percent of the company as of its acquisition in 2015. In 2015, Axel Springer SE acquired an 88 percent stake in Insider Inc. for $343 million (€306 million), implying a total valuation of $442 million. Divisions Business Insider operates a paid division titled BI Intelligence, established in 2013. In July 2015, Business Insider launched the technology website Tech Insider, with a staff of 40 people working primarily from the company's existing New York headquarters, but originally separated from the main Business Insider newsroom. However, Tech Insider was eventually folded into the Business Insider website. Also in 2015, Business Insider launched Insider Picks, the precursor to what is now Insider Reviews, "to help shoppers navigate the complex retail industry and make the best purchasing decisions." In October 2016, Business Insider started Markets Insider as a joint venture with Finanzen.net, another Axel Springer company. Bias, reliability, and editorial policy Glenn Greenwald has critiqued the reliability of Business Insider, along with that of publications including The Wall Street Journal, Yahoo! News, and Slate. In 2010, Business Insider falsely reported that New York Governor David Paterson was slated to resign; BI had earlier reported a false story alleging that Steve Jobs experienced a heart attack. In April 2011, Blodget sent out a notice inviting publicists to "contribute directly" to Business Insider. As of September 2011[update], Business Insider allowed the use of anonymous sources "at any time for any reason", a practice which many media outlets prefer to avoid, or at least to indicate why a source is not identified. According to the World Association of Newspapers and News Publishers, Business Insider gave SAP "limited editorial control" over the content of its "Future of Business" section as of 2013[update]. The website publishes a mix of original reporting and aggregation of other outlets' content. Business Insider has also published native advertising. Jacob Furedi, the former deputy editor of UnHerd and editor of the publication Dispatch, investigated articles on Business Insider and Wired written by a supposed freelancer named Margaux Blanchard after receiving an email. After concluding that the articles were AI-generated, he contacted Press Gazette. Both Business Insider and Wired subsequently removed the articles. Reception In January 2009, the Clusterstock section appeared in Time's list of 25 best financial blogs, and the Silicon Alley Insider section was listed in PC Magazine's list of its "favorite blogs of 2009". 
2009 also saw Business Insider's selection as an official Webby honoree for Best Business Blog. In 2012, Business Insider was named to the Inc. 500. In 2013, the publication was once again nominated in the Blog-Business category at the Webby Awards. In January 2014, The New York Times reported that Business Insider's web traffic was comparable to that of The Wall Street Journal. In 2017, Digiday included its imprint Insider as a candidate in two separate categories—"Best New Vertical" and "Best Use of Instagram"—at its annual Publishing Awards. The website has faced criticism for what critics consider its clickbait-style headlines. A 2013 profile of Blodget and Business Insider in The New Yorker suggested that Business Insider, because it republishes material from other outlets, may not always be accurate. In 2022, Insider won the Pulitzer Prize for Illustrated Reporting and Commentary for its reporting on the persecution of Uyghurs in China. References Works cited External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:Articles_with_unsourced_statements_from_October_2017] | [TOKENS: 138] |
Category:Articles with unsourced statements from October 2017 This category combines all articles with unsourced statements from October 2017 (2017-10) to enable us to work through the backlog more systematically. It is a member of Category:Articles with unsourced statements. To add an article to this category, add {{Citation needed|date=October 2017}} to the article. If you omit the date, a bot will add it for you at some point. Pages in category "Articles with unsourced statements from October 2017" The following 200 pages are in this category, out of approximately 2,383 total. This list may not reflect recent changes. |
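As a minimal illustration of the tagging step above, the template is placed in the article's wikitext directly after the claim that lacks a source; the example sentence here is invented for the sketch, and only the template syntax is taken from the instructions above: The station opened in 1921.{{Citation needed|date=October 2017}} When the page is rendered, the template displays a superscript "citation needed" note after the tagged claim, and the date parameter files the article into this monthly backlog category.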
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Doctor_of_Philosophy] | [TOKENS: 14105] |
Contents Doctor of Philosophy A Doctor of Philosophy (PhD, DPhil; Latin: philosophiae doctor or doctor in philosophia) is a terminal degree that usually denotes the highest level of academic achievement in a given discipline and is awarded following a course of graduate study and original research. The name of the degree is most often abbreviated PhD (or, at times, as Ph.D. in North America) and is pronounced as three separate letters (/ˌpiːeɪtʃˈdiː/ PEE-aych-DEE). The University of Oxford uses the alternative abbreviation "DPhil". PhDs are awarded for programs across the whole breadth of academic fields. Since it is an earned research degree, those studying for a PhD are required to produce original research that expands the boundaries of knowledge, normally in the form of a dissertation, and, in some cases, defend their work before a panel of other experts in the field. In many fields, the completion of a PhD is typically required for employment as a university professor, researcher, or scientist. Definition In the context of the Doctor of Philosophy and other similarly titled degrees, the term "philosophy" does not refer to the field or academic discipline of philosophy, but is used in a broader sense in accordance with its original Greek meaning, which is "love of wisdom". In most of Europe, all fields (including history, philosophy, social sciences, mathematics, and natural philosophy – later known as natural science) other than theology, law, and medicine (the so-called professional, vocational, or technical curricula) were traditionally known as philosophy, and in Germany and elsewhere in Europe the basic faculty of liberal arts was known as the "faculty of philosophy". A PhD candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed journal. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university. Universities sometimes award other types of doctorate besides the PhD, such as the Doctor of Musical Arts (DMA) for music performers, Doctor of Juridical Science (SJD) for legal scholars and the Doctor of Education (EdD) for studies in education. In 2005 the European University Association defined the "Salzburg Principles", 10 basic principles for third-cycle degrees (doctorates) within the Bologna Process. These were followed in 2016 by the "Florence Principles", seven basic principles for doctorates in the arts laid out by the European League of Institutes of the Arts, which have been endorsed by the European Association of Conservatoires, the International Association of Film and Television Schools, the International Association of Universities and Colleges of Art, Design and Media, and the Society for Artistic Research. The specific requirements to earn a PhD degree vary considerably according to the country, institution, and time period, from entry-level research degrees to higher doctorates. During the studies that lead to the degree, the student is called a doctoral student or PhD student; a student who has completed any necessary coursework and related examinations and is working on their thesis/dissertation is sometimes known as a doctoral candidate or PhD candidate. A student attaining this level may be granted a Candidate of Philosophy degree at some institutions or may be granted a master's degree en route to the doctoral degree. 
Sometimes this status is also colloquially known as "ABD", meaning "all but dissertation". PhD graduates may undertake a postdoc in the process of transitioning from study to academic tenure.[citation needed] Individuals who have earned the Doctor of Philosophy degree use the title Doctor (often abbreviated "Dr" or "Dr."), although the etiquette associated with this usage may be subject to the professional ethics of the particular scholarly field, culture, or society. Those who teach at universities or work in academic, educational, or research fields are usually addressed by this title "professionally and socially in a salutation or conversation". Alternatively, holders may use post-nominal letters such as "Ph.D.", "PhD", or "DPhil", depending on the awarding institution. It is, however, traditionally considered incorrect to use both the title and post-nominals together, although usage in that regard has been evolving over time. History In the universities of Medieval Europe, study was organized in four faculties: the basic faculty of arts, and the three higher faculties of theology, medicine, and laws (canon law and civil law). All of these faculties awarded intermediate degrees (bachelors of arts, theology, laws and medicine) and final degrees. Initially, the titles of master and doctor were used interchangeably for the final degrees—the title Doctor was merely a formality bestowed on a Teacher/Master of the art—but by the late Middle Ages the terms Master of Arts and Doctor of Theology/Divinity, Doctor of Law, and Doctor of Medicine had become standard in most places (though in the German and Italian universities the term Doctor was used for all faculties). The doctorates in the higher faculties were quite different from the current PhD degree in that they were awarded for advanced scholarship, not original research. No dissertation or original work was required, only lengthy residency requirements and examinations. Besides these degrees, there was the licentiate. Originally this was a license to teach, awarded shortly before the award of the master's or doctoral degree by the diocese in which the university was located, but later it evolved into an academic degree in its own right, in particular in the continental universities. According to Keith Allan Noble (1994), the first doctoral degree was awarded in medieval Paris around 1150. The doctorate of philosophy developed in Germany as the terminal teacher's credential in the 17th century (circa 1652). There were no PhDs in Germany before the 1650s (when they gradually started replacing the MA as the highest academic degree); arguably, one of the earliest German PhD holders is Erhard Weigel (Dr. phil. hab., Leipzig, 1652).[citation needed] The full course of studies might, for example, lead in succession to the degrees of Bachelor of Arts, Licentiate of Arts, Master of Arts, or Bachelor of Medicine, Licentiate of Medicine, and Doctor of Medicine, but before the early modern era, many exceptions to this existed. Most students left the university without becoming masters of arts, whereas regulars (members of monastic orders) could skip the arts faculty entirely. This situation changed in the early 19th century through the educational reforms in Germany, most strongly embodied in the model of the University of Berlin, founded in 1810 and controlled by the Prussian government. 
The arts faculty, which in Germany was labelled the faculty of philosophy, started demanding contributions to research, attested by a dissertation, for the award of their final degree, which was labelled Doctor of Philosophy (abbreviated as Ph.D.)—originally this was just the German equivalent of the Master of Arts degree. Whereas in the Middle Ages the arts faculty had a set curriculum, based upon the trivium and the quadrivium, by the 19th century it had come to house all the courses of study in subjects now commonly referred to as sciences and humanities. Professors across the humanities and sciences focused on their advanced research. Practically all the funding came from the central government, and it could be cut off if the professor was politically unacceptable.[relevant?] These reforms proved extremely successful, and fairly quickly the German universities started attracting foreign students, notably from the United States. The American students would go to Germany to obtain a PhD after having studied for a bachelor's degree at an American college. So influential was this practice that it was imported to the United States, where in 1861 Yale University started granting the PhD degree to younger students who, after having obtained the bachelor's degree, had completed a prescribed course of graduate study and successfully defended a thesis or dissertation containing original research in science or in the humanities. In Germany, the name of the doctorate was adapted after the philosophy faculty started being split up − e.g. Dr. rer. nat. for doctorates in the faculty of natural sciences − but in most of the English-speaking world the name "Doctor of Philosophy" was retained for research doctorates in all disciplines. The PhD degree and similar awards spread across Europe in the 19th and early 20th centuries. The degree was introduced in France in 1808,[contradictory] replacing diplomas as the highest academic degree; into Russia in 1819, when the Doktor Nauk degree, roughly equivalent to a PhD, gradually started replacing the specialist diploma, roughly equivalent to the MA, as the highest academic degree; and in Italy in 1927, when PhDs gradually started replacing the Laurea as the highest academic degree.[citation needed] Research degrees first appeared in the UK in the late 19th century in the shape of the Doctor of Science (DSc or ScD) and other such "higher doctorates". The University of London introduced the DSc in 1860, but as an advanced study course, following on directly from the BSc, rather than a research degree. The first higher doctorate in the modern sense was Durham University's DSc, introduced in 1882. This was soon followed by other universities, including the University of Cambridge establishing its ScD in the same year and the University of London transforming its DSc into a research degree in 1885. These were, however, very advanced degrees, rather than research-training degrees at the PhD level. Harold Jeffreys said that getting a Cambridge ScD was "more or less equivalent to being proposed for the Royal Society." In 1917, the current PhD degree was introduced, along the lines of the American and German model, and quickly became popular with both British and foreign students. 
The slightly older degrees of Doctor of Science and Doctor of Literature/Letters still exist at British universities; together with the much older degrees of Doctor of Divinity (DD), Doctor of Music (DMus), Doctor of Civil Law (DCL), and Doctor of Medicine (MD), they form the higher doctorates, but apart from honorary degrees, they are only infrequently awarded. In English (but not Scottish) universities, the Faculty of Arts had become dominant by the early 19th century. Indeed, the higher faculties had largely atrophied, since medical training had shifted to teaching hospitals, the legal training for the common law system was provided by the Inns of Court (with some minor exceptions, see Doctors' Commons), and few students undertook formal study in theology. This contrasted with the situation in the continental European (and Scottish) universities at the time, where the preparatory role of the Faculty of Philosophy or Arts was to a great extent taken over by secondary education: in modern France, the Baccalauréat is the examination taken at the end of secondary studies. The reforms at the Humboldt University transformed the Faculty of Philosophy or Arts (and its more recent successors such as the Faculty of Sciences) from a lower faculty into one on a par with the Faculties of Law and Medicine. Similar developments occurred in many other continental European universities, and at least until reforms in the early 21st century, many European countries (e.g., Belgium, Spain, and the Scandinavian countries) had in all faculties triple degree structures of bachelor (or candidate) − licentiate − doctor as opposed to bachelor − master − doctor; the meaning of the different degrees varied from country to country, however. To this day, this is also still the case for the pontifical degrees in theology and canon law; for instance, in sacred theology, the degrees are Bachelor of Sacred Theology (STB), Licentiate of Sacred Theology (STL), and Doctor of Sacred Theology (STD), and in canon law: Bachelor of Canon Law (JCB), Licentiate of Canon Law (JCL), and Doctor of Canon Law (JCD). Until the mid-19th century, advanced degrees were not a criterion for professorships at most colleges. That began to change as the more ambitious scholars at major schools went to Germany for one to three years to obtain a PhD in the sciences or humanities. Graduate schools slowly emerged in the United States. In 1852, the first honorary PhD in the nation was given at Bucknell University in Lewisburg, Pennsylvania to Ebenezer Newton Elliott. Nine years later, in 1861, Yale University awarded three PhDs: to Eugene Schuyler in philosophy and psychology, Arthur Williams Wright in physics, and James Morris Whiton Jr. in classics. Over the following two decades, Harvard University, New York University, Princeton University, and the University of Pennsylvania, also began granting the degree. Major shifts toward graduate education were foretold by the opening of Clark University in 1887, which offered only graduate programs, and the Johns Hopkins University, which focused on its PhD program. By the 1890s, Harvard, Columbia, Michigan and Wisconsin were building major graduate programs, whose alumni were hired by new research universities. By 1900, 300 PhDs were awarded annually, most of them by six universities. It was no longer necessary to study in Germany. However, half of the institutions awarding earned PhDs in 1899 were undergraduate institutions that granted the degree for work done away from campus. 
Degrees awarded by universities without legitimate PhD programs accounted for about a third of the 382 doctorates recorded by the US Department of Education in 1900, of which another 8–10% were honorary. The awarding of PhD as an honorary degree was banned by the Board of Regents of the University of the State of New York in 1897. This had a nation-wide impact, and after 1907, less than 10 honorary PhDs were awarded in the United States each year. The last authenticated PhD awarded honoris causa was awarded in 1937 to Bing Crosby by Gonzaga University. At the start of the 20th century, U.S. universities were held in low regard internationally and many American students were still traveling to Europe for PhDs. The lack of centralised authority meant anyone could start a university and award PhDs. This led to the formation of the Association of American Universities by 14 leading research universities (producing nearly 90% of the approximately 250 legitimate research doctorates awarded in 1900), with one of the main goals being to "raise the opinion entertained abroad of our own Doctor's Degree." In Germany, the national government funded the universities and the research programs of the leading professors. It was impossible for professors who were not approved by Berlin to train graduate students. In the United States, by contrast, private universities and state universities alike were independent of the federal government. Independence was high, but funding was low. The breakthrough came from private foundations, which began regularly supporting research in science and history; large corporations sometimes supported engineering programs. The postdoctoral fellowship was established by the Rockefeller Foundation in 1919. Meanwhile, the leading universities, in cooperation with the learned societies, set up a network of scholarly journals. "Publish or perish" became the formula for faculty advancement in the research universities. After World War II, state universities across the country expanded greatly in undergraduate enrollment, and eagerly added research programs leading to masters or doctorate degrees. Their graduate faculties had to have a suitable record of publication and research grants. Late in the 20th century, "publish or perish" became increasingly important in colleges and smaller universities. Requirements Detailed requirements for the award of a PhD degree vary throughout the world and even from school to school. It is usually required for the student to hold an Honours degree or a Master's degree with high academic standing, in order to be considered for a PhD program.[citation needed] In the US, Canada, India, Sweden and Denmark, for example, many universities require coursework in addition to research for PhD degrees. In other countries (such as the UK) there is generally no such condition, though this varies by university and field. Some individual universities or departments specify additional requirements for students not already in possession of a bachelor's degree or equivalent or higher. In order to submit a successful PhD admission application, copies of academic transcripts, letters of recommendation, a research proposal, and a personal statement are often required. Most universities also invite for a special interview before admission. A candidate must submit a project, thesis, or dissertation often consisting of a body of original academic research, which is in principle worthy of publication in a peer-reviewed context. 
Moreover, some PhD programs, especially in science, require one to three published articles in peer-reviewed journals. In many countries, a candidate must defend this work before a panel of expert examiners appointed by the university; this defense is open to the public in some countries, and held in private in others; in other countries, the dissertation is examined by a panel of expert examiners who stipulate whether the dissertation is in principle passable and any issues that need to be addressed before the dissertation can be passed. Some universities in the non-English-speaking world have begun adopting similar standards to those of the anglophone PhD degree for their research doctorates (see the Bologna process). A PhD student or candidate is conventionally required to study on campus under close supervision. With the popularity of distance education and e-learning technologies, some universities now accept students enrolled into a distance education part-time mode. In a "sandwich PhD" program, PhD candidates do not spend their entire study period at the same university. Instead, the PhD candidates spend the first and last periods of the program at their home universities and in between conduct research at another institution or field research. Occasionally a "sandwich PhD" will be awarded by two universities. It is possible to broaden the field of study pursued by a PhD student by the addition of a minor subject of study within a different discipline. Value and criticism A career in academia generally requires a PhD, although in some countries it is possible to reach relatively high positions without a doctorate. In North America, professors are increasingly being required to have a PhD, and the percentage of faculty with a PhD may be used as a university ratings measure. The motivation may also include increased salary, but in many cases, this is not the result. Research by Bernard H. Casey of the University of Warwick, U.K, suggests that, over all subjects, PhDs provide an earnings premium of 26% over non-accredited graduates, but notes that master's degrees already provide a premium of 23% and a bachelor's 14%. While this is a small return to the individual (or even an overall deficit when tuition and lost earnings during training are accounted for), he claims there are significant benefits to society for the extra research training. However, some research suggests that overqualified workers are often less satisfied and less productive at their jobs. These difficulties are increasingly being felt by graduates of professional degrees, such as law school, looking to find employment. PhD students may need to take on debt to undertake their degree. A PhD is also required in some positions outside academia, such as research jobs in major international agencies. In some cases, the executive directors of some types of foundations may be expected to hold a PhD[citation needed]. A PhD is sometimes felt to be a necessary qualification in certain areas of employment, such as in foreign policy think-tanks: U.S. News & World Report wrote in 2013 that "[i]f having a master's degree at the minimum is de rigueur in Washington's foreign policy world, it is no wonder many are starting to feel that the PhD is a necessary escalation, another case of costly signaling to potential employers". 
Similarly, an article on the Australian public service states that "credentialism in the public service is seeing a dramatic increase in the number of graduate positions going to PhDs and masters degrees becoming the base entry level qualification". The Economist published an article in 2010 citing various criticisms against the state of PhDs. These included a prediction by economist Richard B. Freeman that, based on pre-2000 data, only 20% of life science PhD students would gain a faculty job in the U.S., and that in Canada 80% of postdoctoral research fellows earned less than or equal to an average construction worker ($38,600 a year). According to the article, only the fastest developing countries (e.g. China or Brazil) have a shortage of PhDs. In 2022, Nature reported that PhD students' wages in biological sciences in the US do not cover living costs. The U.S. higher education system often offers little incentive to move students through PhD programs quickly and may even provide incentives to slow them down. To counter this problem, the United States introduced the Doctor of Arts degree in 1970 with seed money from the Carnegie Foundation for the Advancement of Teaching. The aim of the Doctor of Arts degree was to shorten the time needed to complete the degree by focusing on pedagogy over research, although the Doctor of Arts still contains a significant research component. Germany is one of the few nations engaging these issues, and it has been doing so by reconceptualising PhD programs to be training for careers, outside academia, but still at high-level positions. This development can be seen in the extensive number of PhD holders, typically from the fields of law, engineering, and economics, at the very top corporate and administrative positions. To a lesser extent, the UK research councils have tackled the issue by introducing, since 1992, the EngD.[citation needed][clarification needed] Mark C. Taylor opined in 2011 in Nature that total reform of PhD programs in almost every field is necessary in the U.S. and that pressure to make the necessary changes will need to come from many sources (students, administrators, public and private sectors, etc.). Other articles in Nature have also examined the issue of PhD reform. Freeman Dyson, professor emeritus at the Institute for Advanced Study in Princeton, was opposed to the PhD system and did not have a PhD degree. On the other hand, it was understood by all his peers that he was a world leading scientist with many accomplishments already under his belt during his graduate study years and he was eligible to gain the degree at any given moment.[citation needed] Degrees by country The UNESCO, in its International Standard Classification of Education (ISCED), states that: "Programmes to be classified at ISCED level 8 are referred to in many ways around the world such as PhD, DPhil, D.Lit, D.Sc, LL.D, Doctorate or similar terms. However, it is important to note that programmes with a similar name to 'doctor' should only be included in ISCED level 8 if they satisfy the criteria described in Paragraph 263. For international comparability purposes, the term 'doctoral or equivalent' is used to label ISCED level 8." In German-speaking nations, most Eastern European nations, successor states of the former Soviet Union, most parts of Africa, Asia, and many Spanish-speaking countries, the corresponding degree to a Doctor of Philosophy is simply called "Doctor" (Doktor), and the subject area is distinguished by a Latin suffix (e.g., "Dr. med." 
for Doctor medicinae, Doctor of Medicine; "Dr. rer. nat." for Doctor rerum naturalium, Doctor of the Natural Sciences; "Dr. phil." for Doctor philosophiae, Doctor of Philosophy; "Dr. iur." for Doctor iuris, Doctor of Laws). In Argentina, admission to a PhD program at a public Argentine university requires the full completion of a Master's degree or a Licentiate degree. Non-Argentine Master's titles are generally accepted into a PhD program when the degree comes from a recognized university. While a significant portion of postgraduate students finance their tuition and living costs with teaching or research work at private and state-run institutions, international institutions, such as the Fulbright Program and the Organization of American States (OAS), have been known to grant full scholarships for tuition with allowances for housing. Others apply for funds to CONICET, the national public body of scientific and technical research, which typically awards more than a thousand scholarships each year for this purpose, thus guaranteeing many PhD candidates remain within the system. Upon completion of at least two years' research and coursework as a graduate student, a candidate must demonstrate truthful and original contributions to their specific field of knowledge within a frame of academic excellence. The doctoral candidate's work should be presented in a dissertation or thesis prepared under the supervision of a tutor or director and reviewed by a Doctoral Committee. This committee should be composed of examiners that are external to the program, and at least one of them should also be external to the institution. The academic degree of Doctor, in the field of science to which the candidate has contributed original and rigorous research, is received after a successful defense of the candidate's dissertation. Admission to a PhD program in Australia requires applicants to demonstrate capacity to undertake research in the proposed field of study. The standard requirement is a bachelor honours degree with either first-class or upper second-class honours. Research master's degrees and coursework master's degrees with a 25% research component are usually considered equivalent. It is also possible for research master's degree students to "upgrade" to PhD candidature after demonstrating sufficient progress. PhD students are sometimes offered a scholarship to study for their PhD degree. The most common of these was the government-funded Australian Postgraduate Award (APA) until its dissolution in 2017. It was replaced by the Research Training Program (RTP), awarded to students of "exceptional research potential", which provides a living stipend to students of approximately A$34,000 a year (tax-free). RTPs are paid for a duration of 3 years, while a 6-month extension is usually possible upon citing delays out of the control of the student. Some universities also fund a similar scholarship that matches the APA amount. Due to a continual increase in living costs, many PhD students are forced to live below the poverty line. In addition to the more common RTP and university scholarships, Australian students have other sources of scholarship funding, coming from industry, private enterprise, and organisations. 
Australian citizens, permanent residents, and New Zealand citizens are not charged course fees for their PhD or research master's degree, with the exception in some universities of the student services and amenities fee (SSAF) which is set by each university and typically involves the largest amount allowed by the Australian government. All fees are paid for by the Australian government, except for the SSAF, under the Research Training Program. International students and coursework master's degree students must pay course fees unless they receive a scholarship to cover them. Completion requirements vary. Most Australian PhD programs do not have a required coursework component. The credit points attached to the degree are all in the product of the research, which is usually an 80,000-word thesis that makes a significant new contribution to the field. Recent pressure on higher degree by research (HDR) students to publish has resulted in increasing interest in PhD by publication as opposed to the more traditional PhD by dissertation, which typically requires a minimum of two publications, but which also requires traditional thesis elements such as an introductory exegesis, and linking chapters between papers. The PhD thesis is sent to external examiners who are experts in the field of research and who have not been involved in the work. Examiners are nominated by the candidate's university, and their identities are often not revealed to the candidate until the examination is complete. A formal oral defence is generally not part of the examination of the thesis, largely because of the distances that would need to be travelled by the overseas examiners; however, since 2016, there is a trend toward implementing this in many Australian universities. At the University of South Australia, PhD candidates who started after January 2016 now undertake an oral defence via an online conference with two examiners. Admission to a doctoral programme at a university in Canada typically requires completion of a Master's degree in a related field, with sufficiently high grades and proven research ability. In some cases, a student may progress directly from an Honours Bachelor's degree to a PhD program; other programs allow a student to fast-track to a doctoral program after one year of outstanding work in a Master's program (without having to complete the Master's). An application package typically includes a research proposal, letters of reference, transcripts, and in some cases, a writing sample or Graduate Record Examinations scores. A common criterion for prospective PhD students is the comprehensive or qualifying examination, a process that often commences in the second year of a graduate program. Generally, successful completion of the qualifying exam permits continuance in the graduate program. Formats for this examination include oral examination by the student's faculty committee (or a separate qualifying committee), or written tests designed to demonstrate the student's knowledge in a specialized area (see below) or both. At English-speaking universities, a student may also be required to demonstrate English language abilities, usually by achieving an acceptable score on a standard examination (for example the Test of English as a Foreign Language). Depending on the field, the student may also be required to demonstrate ability in one or more additional languages. A prospective student applying to French-speaking universities may also have to demonstrate some English language ability. 
While some students work outside the university (or at student jobs within the university), in some programs students are advised (or must agree) not to devote more than ten hours per week to activities (e.g., employment) outside of their studies, particularly if they have been given funding. For large and prestigious scholarships, such as those from NSERC and Fonds québécois de la recherche sur la nature et les technologies, this is an absolute requirement. At some Canadian universities, most PhD students receive an award equivalent to part or all of the tuition amount for the first four years (this is sometimes called a tuition deferral or tuition waiver). Other sources of funding include teaching assistantships and research assistantships; experience as a teaching assistant is encouraged but not requisite in many programs. Some programs may require all PhD candidates to teach, which may be done under the supervision of their supervisor or regular faculty. Besides these sources of funding, there are also various competitive scholarships, bursaries, and awards available, such as those offered by the federal government via NSERC, CIHR, or SSHRC. In general, the first two years of study are devoted to completion of coursework and the comprehensive examinations. At this stage, the student is known as a "PhD student" or "doctoral student." It is usually expected that the student will have completed most of their required coursework by the end of this stage. Furthermore, it is usually required that by the end of eighteen to thirty-six months after the first registration, the student will have successfully completed the comprehensive exams. Upon successful completion of the comprehensive exams, the student becomes known as a "PhD candidate." From this stage on, the bulk of the student's time will be devoted to their own research, culminating in the completion of a PhD thesis or dissertation. The final requirement is an oral defense of the thesis, which is open to the public in some, but not all, universities. At most Canadian universities, the time needed to complete a PhD degree typically ranges from four to six years. It is, however, not uncommon for students to be unable to complete all the requirements within six years, particularly given that funding packages often support students for only two to four years; many departments will allow program extensions at the discretion of the thesis supervisor or department chair. Alternative arrangements exist whereby a student is allowed to let their registration in the program lapse at the end of six years and re-register once the thesis is completed in draft form. The general rule is that graduate students are obligated to pay tuition until the initial thesis submission has been received by the thesis office. In other words, if a PhD student defers or delays the initial submission of their thesis, they remain obligated to pay fees until such time that the thesis has been received in good standing. In China, doctoral programs can be applied for directly after obtaining a bachelor's degree or after obtaining a master's degree. Those who directly apply for a doctoral program after a bachelor's degree usually need four to five years to obtain a doctorate and will not be awarded a master's degree during the period. The courses at the doctoral level are mainly completed in the first and second years, and the remaining years are spent doing experiments/research and writing papers. At most universities, the maximum duration of doctoral study is 7 years. 
If a doctoral student does not complete their degree within 7 years, it is likely that they can only obtain a study certificate without any degree. China has thirteen statutory types of academic degrees, which also apply to doctorate degrees. Despite the naming difference, all these thirteen types of doctoral degrees are research and academic degrees that are equivalent to PhD degrees. These thirteen doctorates are: In international academic communication, Chinese doctoral degree recipients sometimes translate their doctorate degree names to PhD in Discipline (such as PhD in Engineering, Computer Science) to facilitate peer understanding. In Colombia, the PhD course admission may require a master's degree (Magíster) in some universities, especially public universities. However, a direct doctorate may also be applied for in specific cases, according to the jury's recommendations on the thesis proposal. Most postgraduate students in Colombia must finance their tuition fees by means of teaching assistant positions or research work. Some institutions such as Colciencias, Colfuturo, CeiBA, and Icetex grant scholarships or provide awards in the form of forgivable loans. After two or two and a half years, it is expected that the research work of the doctoral candidate be submitted in the form of oral qualification, where suggestions and corrections about the research hypothesis and methodology, as well as on the course of the research work, are made. The PhD degree is only received after a successful defense of the candidate's thesis is performed (four or five years after the enrollment), most of the time also requiring that the most important results have been published in at least one peer-reviewed high-impact international journal. In Finland, the degree of filosofian tohtori (abbreviated FT) is awarded by traditional universities, such as University of Helsinki. A Master's degree is required, and the doctorate combines approximately 4–5 years of research (amounting to 3–5 scientific articles, some of which must be first-author) and 60 ECTS points of studies. Other universities such as Aalto University award degrees such as tekniikan tohtori (TkT, engineering), taiteen tohtori (TaT, art), etc., which are translated in English to Doctor of Science (D.Sc.), and they are formally equivalent. The licentiate (filosofian lisensiaatti or FL) requires only 2–3 years of research and is sometimes done before an FT. Before 1984 three research doctorates existed in France: the State doctorate (doctorat d'État, the old doctorate introduced in 1808), the third cycle doctorate (doctorat de troisième cycle, created in 1954 and shorter than the State doctorate) and the diploma of doctor-engineer (diplôme de docteur-ingénieur created in 1923), for technical research. After 1984, only one type of doctoral degree remained, called "doctorate" (Doctorat). It is equivalent to the PhD. Students pursuing the PhD degree must first complete a master's degree program, which takes two years after graduation with a bachelor's degree (five years in total). The candidate must apply to a doctoral research project associated with a doctoral advisor (Directeur de thèse or directeur doctoral) with a habilitation throughout the doctoral program. The PhD admission is granted by a graduate school (in French, "école doctorale"). A PhD candidate may follow some in-service training offered by the graduate school while continuing their research in a laboratory. 
Their research may be carried out in a laboratory,[clarification needed] at a university or in a company. In the first case, the candidates can be hired by the university or a research organisation. In the last case, the company hires the candidate and they are supervised by both the company's tutor and a lab's professor. Completion of the PhD degree generally requires 3 years after the master's degree, but it can last longer in specific cases. The financing of PhD research comes mainly from funds for research of the French Ministry of Higher Education and Research. The most common procedure is a short-term employment contract called a doctoral contract: the institution of higher education is the employer and the PhD candidate the employee. However, the candidate can apply for funds from a company, which can host them at its premises (as in the case where PhD candidates do their research at a company). In another possible situation, the company and the institute can sign a funding agreement together so that the candidate still has a public doctoral contract but works at the company on a daily basis (for example, this is particularly the case for the (French) Scientific Cooperation Foundation). Many other resources come from some regional/city projects, some associations, etc. In Germany, admission to a doctoral program is generally on the basis of having an advanced degree (i.e., a master's degree, diplom, magister, or staatsexamen), mostly in a related field and having above-average grades. A candidate must also find a tenured professor from a university to serve as the formal advisor and supervisor (Betreuer) of the dissertation throughout the doctoral program. This supervisor is informally referred to as Doktorvater or Doktormutter, which literally translate to "doctor's father" and "doctor's mother" respectively. The formal admission is the beginning of the so-called Promotionsverfahren, while the final granting of the degree is called Promotion. The duration of the doctorate depends on the field. A doctorate in medicine may take less than a full-time year to complete; those in other fields, two to six years. Most doctorates are awarded with specific Latin designations for the field of research (except for engineering, where the designation is German), instead of a general name for all fields (such as the PhD). The most important degrees are: Over fifty such designations exist, many of them rare or no longer in use. As a title, the degree is commonly written in front of the name in abbreviated form, e.g., Dr. rer. nat. Max Mustermann or Dr. Max Mustermann, dropping the designation entirely. However, leaving out the designation is only allowed when the doctorate degree is not an honorary doctorate, which must be indicated by Dr. h.c. (from Latin honoris causa). While most German doctorates are considered equivalent to the PhD, an exception is the medical doctorate, where "doctoral" dissertations are often written alongside undergraduate study. The European Research Council decided in 2010 that those doctorates do not meet the international standards of a PhD research degree. There are different forms of university-level institution in Germany, but only professors from "Universities" (Univ.-Prof.) can serve as doctoral supervisors – "Universities of Applied Sciences" (Fachhochschulen) are not entitled to award doctorates, although some exceptions apply to this rule. 
Depending on the university, doctoral students (Doktoranden) can be required to attend formal classes or lectures, some of them also including exams or other scientific assignments, in order to get one or more certificates of qualification (Qualifikationsnachweise). Depending on the doctoral regulations (Promotionsordnung) of the university and sometimes on the status of the doctoral student, such certificates may not be required. Usually, former students, research assistants or lecturers from the same university may be spared from attending extra classes. Instead, under the tutelage of a single professor or advisory committee, they are expected to conduct independent research. In addition to doctoral studies, many doctoral candidates work as teaching assistants, research assistants, or lecturers. Many universities have established research-intensive Graduiertenkollegs ("graduate colleges"), which are graduate schools that provide funding for doctoral studies. The typical duration of a doctoral program can depend heavily on the subject and area of research. Usually, three to five years of full-time research work are required. The average time to graduation is 4.5 years. In 2014, the median age of new PhD graduates was 30.4 years. In India, a master's degree is usually required to gain admission to a doctoral program. Direct admission to a PhD program after graduating with a BTech may also be granted by the IITs, the IIITs, the NITs, and the Academy of Scientific and Innovative Research. In some subjects, completing a Master of Philosophy (MPhil) is a prerequisite to obtaining funding/fellowship for a PhD. According to new rules prescribed by the UGC, universities must conduct Research Eligibility Tests in ability and the selected subject. After clearing these tests, the shortlisted candidates are required to appear for an interview with the available PhD supervisor and give presentations of their research proposal (plan of work or synopsis). During study, candidates must submit progress reports and, after successful completion of the coursework, are required to give a pre-submission presentation and finally defend their thesis in an open defense viva-voce. It is mandatory in India to qualify for the National Eligibility Test to apply for a professorship, lectureship or Junior Research Fellowship (NET for LS and JRF) conducted by the National Testing Agency (NTA). The Dottorato di ricerca (research doctorate), abbreviated to "Dott. Ric." or "PhD", is an academic title awarded at the end of a course of not less than three years, admission to which is based on entrance examinations and academic rankings in the Bachelor of Arts ("Laurea", a three-year diploma) and Master of Arts ("Laurea Magistrale", a two-year diploma). While the standard PhD follows the Bologna process, the MD–PhD programme may be completed in two years. The first institution in Italy to create a doctoral program (PhD) was Scuola Normale Superiore di Pisa in 1927 under the historic name "Diploma di Perfezionamento". Further, the research doctorates or PhD (Dottorato di ricerca) in Italy were introduced by law and Presidential Decree in 1980, referring to the reform of academic teaching, training and experimentation in organisation and teaching methods. 
The Superior Graduate Schools in Italy (Scuola Superiore Universitaria), also called Schools of Excellence (Scuole di Eccellenza), such as the Scuola Normale Superiore di Pisa and the Sant'Anna School of Advanced Studies, still keep their reputed historical "Diploma di Perfezionamento" PhD title by law and MIUR Decree. Doctorate courses are open, without age or citizenship limits, to all those who already hold a "laurea magistrale" (master's degree) or a similar academic title awarded abroad which has been recognised as equivalent to an Italian degree by the Committee responsible for the entrance examinations. The number of places on offer each year and details of the entrance examinations are set out in the examination announcement. In Poland, a doctoral degree (Pol. doktor), abbreviated to PhD (Pol. dr), is an advanced academic degree awarded by universities in most fields and by the Polish Academy of Sciences, regulated by acts of the Polish parliament and government orders, in particular by the Ministry of Science and Higher Education of the Republic of Poland. Students with a master's degree or equivalent are accepted to a doctoral entrance exam. The title of PhD is awarded to a scientist who has completed a minimum of three years of PhD studies (Pol. studia doktoranckie; not required to obtain the PhD), finished a theoretical or laboratory scientific work, passed all PhD examinations, submitted the dissertation (a document presenting the author's research and findings), and successfully defended the doctoral thesis. Typically, upon completion, the candidate undergoes an oral examination, always public, by a supervisory committee with expertise in the given discipline. The doctorate was introduced in Sweden in 1477 and in Denmark–Norway in 1479 and awarded in theology, law, and medicine, while the magister's degree was the highest degree at the Faculty of Philosophy, equivalent to the doctorate. Scandinavian countries were among the early adopters of a degree known as a doctorate of philosophy, based upon the German model. Denmark and Norway both introduced the Dr. Phil(os). degree in 1824, replacing the Magister's degree as the highest degree, while Uppsala University of Sweden renamed its Magister's degree Filosofie Doktor (fil. dr) in 1863. These degrees, however, became comparable to the German Habilitation rather than the doctorate, as the Scandinavian countries did not have a separate Habilitation. The degrees were uncommon and not a prerequisite for employment as a professor; rather, they were seen as distinctions similar to the British (higher) doctorates (DLitt, DSc). Denmark introduced an American-style PhD, the ph.d., in 1989; it formally replaced the Licentiate's degree and is considered a lower degree than the dr. phil. degree; officially, the ph.d. is not considered a doctorate, but unofficially, it is referred to as "the smaller doctorate", as opposed to the dr. phil., "the grand doctorate." Holders of a ph.d. degree are not entitled to style themselves as "Dr." Currently, Denmark treats the dr. phil. as the proper doctorate and a higher degree than the ph.d., whereas in Norway the historically analogous dr. philos. degree is officially regarded as equivalent to the new ph.d. Today, the Norwegian PhD degree is awarded to candidates who have completed a supervised doctoral programme at an institution, while candidates with a master's degree who have conducted research on their own may submit their work for a Dr. Philos. defence at a relevant institution.
PhD candidates must complete one trial lecture before they can defend their thesis, whereas Dr. Philos. candidates must complete two trial lectures. In Sweden, the doctorate of philosophy was introduced at Uppsala University's Faculty of Philosophy in 1863. The Latin term is officially translated into Swedish as filosofie doktor and commonly abbreviated fil. dr or FD. The degree represents the traditional Faculty of Philosophy and encompasses subjects from biology, physics, and chemistry to languages, history, and the social sciences, being the highest degree in these disciplines. Sweden currently has two research-level degrees: the Licentiate's degree, which is comparable to the Danish degree formerly known as the Licentiate's degree and now as the ph.d., and the higher doctorate of philosophy, Filosofie Doktor. Some universities in Sweden also use the term teknologie doktor for doctorates awarded by institutes of technology (for doctorates in engineering or natural-science-related subjects such as materials science, molecular biology, and computer science). The Swedish term fil. dr is often also used as a translation of corresponding degrees from, for example, Denmark and Norway. Singapore has six universities offering doctoral study opportunities: the National University of Singapore, Nanyang Technological University, Singapore Management University, the Singapore Institute of Technology, the Singapore University of Technology and Design, and the Singapore University of Social Sciences. The first doctoral degree in South Africa was issued in 1899 by the University of the Cape of Good Hope (now the University of South Africa, or UNISA), and the first PhDs were conferred in the 1920s by the University of Cape Town and the University of the Witwatersrand. Owing to the influence of British colonialism, South African higher education bears profound similarities to the modern UK university system. South Africa has twenty-six state universities, all of which offer doctoral degrees. Presently, only two private institutions offer accredited PhDs, namely the South African Theological Seminary and St. Augustine College of South Africa. Typically, South African colleges and universities abbreviate Doctor of Philosophy as either PhD or DPhil. South African PhD programs require both a four-year undergraduate degree and a relevant graduate degree. Certain PhD programs require preexisting knowledge of research languages or field experience. Some programs require applicants to undergo an interview or to provide references, a curriculum vitae, and letters of recommendation. Typically, PhD applicants must furnish a provisional research proposal which discloses the basic trajectory of their area of interest. English competency is a universal requirement. Akin to PhD programs in the UK and in the Netherlands, South African PhD programs consist of a research thesis or dissertation produced under the supervision of a subject-matter expert. South African PhD programs are designed to result in a substantial piece of scholarship that has undergone critical evaluation through peer review. Unlike PhD programs in many other African countries or the US, South African PhD programs rarely involve coursework and are undertaken through rigorous and semi-independent research. Most South African PhD programs are designed to be completed within three to six years. In Spain, doctoral degrees are regulated by Real Decreto (Spanish for Royal Decree) 99/2011, in force from the 2014/2015 academic year.
They are granted by a university on behalf of the King, and the diploma has the force of a public document. The Ministry of Science keeps a National Registry of Theses called TESEO. All doctoral programs are of a research nature. The studies should include original results and can take a maximum of three years, although this period can be extended under certain circumstances to five years. The student must write a thesis presenting a new discovery or original contribution to science. If approved by their thesis director (or directors), the study will be presented to a panel of 3–5 distinguished scholars. Any doctor attending the public presentation is allowed to challenge the candidate with questions on their research. If approved, the candidate will receive the doctorate. Four marks can be granted: Unsatisfactory, Pass, Satisfactory, and Excellent. The denomination "cum laude" (Latin for "with honours") can be added to an Excellent mark if all five members of the tribunal agree. The social standing of doctors in Spain was evidenced by the fact that Philip III allowed PhD holders to take a seat and cover their heads during a ceremony at the University of Salamanca in which the King took part, so as to recognise their merits. This right to cover one's head in the presence of the King is traditionally reserved in Spain for Grandees and Dukes. The concession is remembered in solemn ceremonies held by the university, in which Doctors are invited to take a seat and cover their heads as a reminder of that royal leave. All doctoral degree holders are reciprocally recognized as equivalent in Germany and Spain ("Bonn Agreement of November 14, 1994"). In Ukraine, starting in 2016, the Doctor of Philosophy (PhD, Ukrainian: Доктор філософії) is the highest education level and the first science degree. The PhD is awarded in recognition of a substantial contribution to scientific knowledge and the origination of new directions and visions in science, and it is a prerequisite for heading a university department in Ukraine. Upon completion of a PhD, the holder can elect to continue their studies and obtain a post-doctoral degree called Doctor of Sciences (DSc, Ukrainian: Доктор наук), which is the second and highest science degree in Ukraine. In the United Kingdom, universities admit applicants to PhD programs on a case-by-case basis; depending on the university, admission is typically conditional on the prospective student having completed an undergraduate degree with at least upper second-class honours or a postgraduate master's degree, but requirements can vary even within institutions. For example, the University of Edinburgh requires a minimum of a 2:1 honours degree (or international equivalent) for a PhD in clinical psychology, while its business school requires a master's degree with an average of 65% in the taught components and a distinction-level dissertation. For students who are not from English-speaking countries, UK Visas and Immigration requires universities to assess English proficiency. Many do this using IELTS tests, although the requirements may vary depending on the institution. Some 143 UK universities require applicants to take IELTS before admission, with minimum acceptable scores ranging from 4 to 6.5 and above; however, some universities are willing to accept students without IELTS. Students are first accepted onto an MPhil or MRes programme and may transfer to PhD regulations upon satisfactory progress; this is sometimes referred to as APG (Advanced Postgraduate) status.
This is typically done after one or two years, and the research work completed may count towards the PhD degree. If a student fails to make satisfactory progress, they may be offered the opportunity to write up and submit for an MPhil degree, e.g. at King's College London and the University of Manchester. In many universities, the MPhil is also offered as a stand-alone research degree. PhD students from outside the EU/EEA or other exempt countries are required to comply with the Academic Technology Approval Scheme (ATAS), which involves undergoing a security clearance process with the Foreign Office for courses in sensitive areas where research could be used for weapons development. This requirement was introduced in 2007 due to concerns about overseas terrorism and weapons proliferation. In the United Kingdom, funding for PhD students is sometimes provided by government-funded Research Councils (UK Research and Innovation – UKRI) or the European Social Fund, usually in the form of a tax-free bursary which consists of tuition fees together with a stipend. Tuition fees are charged at different rates for "Home/EU" and "Overseas" students, generally £3,000–£6,000 per year for the former and £9,000–£14,500 for the latter (which includes EU citizens who have not been normally resident in the EEA for the last three years), although this can rise to over £16,000 at elite institutions. Higher fees are often charged for laboratory-based degrees. As of 2022/23[update], the national indicative fee for PhD students is £4,596, increasing annually, typically with inflation; there is no regulation of the fees charged by institutions, but if they charge a higher fee they may not require Research Council funded students to make up any difference themselves. As of 2022/23[update], the national minimum stipend for UKRI-funded students is £16,062 per year, also increasing annually, typically with inflation. The period of funding for a PhD project is between three and four years, depending on the research council and the decisions of individual institutions, with extensions in funding of up to twelve months available to offset periods of absence for maternity leave, shared parental leave, adoption leave, absences covered by a medical certificate, and extended jury service. PhD work beyond this may be unfunded or funded from other sources. A very small number of scientific studentships are sometimes paid at a higher rate; for example, in London, Cancer Research UK, the ICR and the Wellcome Trust stipend rates start at around £19,000 and progress annually to around £23,000 a year, an amount that is free of tax and national insurance. Research Council funding is distributed to Doctoral Training Partnerships and Centres for Doctoral Training, which are responsible for student selection within the eligibility guidelines established by the Research Councils. The ESRC (Economic and Social Research Council), for example, explicitly states that a 2:1 minimum (or a master's degree) is required. Many students who are not in receipt of external funding may choose to undertake the degree part-time, thus reducing the tuition fees. The tuition fee per annum for a part-time PhD degree is typically 50–60% of that for the equivalent full-time doctorate. However, since the duration of a part-time PhD degree is longer than that of a full-time degree, the overall cost may be the same or higher. The part-time option also leaves free time in which to earn money for subsistence.
Students may also take part in tutoring, work as research assistants, or (occasionally) deliver lectures, typically at a rate of £12–14 per hour, either to supplement existing low income or as a sole means of funding. There is usually a preliminary assessment to remain in the program, and the thesis is submitted at the end of a three- to four-year program. These periods are usually extended pro rata for part-time students. With special dispensation, the final date for the thesis can be extended for up to four additional years, for a total of seven, but this is rare.[better source needed] For full-time PhDs, a four-year time limit has now been fixed, and students must apply for an extension to submit a thesis past this point. Since the early 1990s, British funding councils have adopted a policy of penalising departments where large proportions of students fail to submit their theses within four years of achieving PhD-student status (or the pro rata equivalent) by reducing the number of funded places in subsequent years. Inadvertently, this creates significant pressure on the candidate to minimise the scope of projects with a view to thesis submission, regardless of quality, and discourages time spent on activities that would otherwise further the impact of the research on the community (e.g., publications in high-impact journals, seminars, workshops). Furthermore, supervising staff are encouraged in their career progression to ensure that the PhD students under their supervision finalise their projects in three years rather than the four that the program is permitted to cover. These issues contribute to an overall discrepancy between supervisors and PhD candidates in the priority they assign to the quality and impact of the research contained in a PhD project, the former favouring quick PhD projects across several students and the latter favouring a larger scope for their own ambitious project, training, and impact.[citation needed] There has recently been an increase in the number of Integrated PhD programs available, such as at the University of Southampton. These courses include a Master of Research (MRes) in the first year, which consists of a taught component as well as laboratory rotation projects. The PhD must then be completed within the next three years. As this includes the MRes, all deadlines and timeframes are brought forward to encourage completion of both the MRes and the PhD within four years of commencement. These programs are designed to provide students with a greater range of skills than a standard PhD, and, for the university, they are a means of gaining an extra year's fees from public sources. Some UK universities (e.g. Oxford) abbreviate their Doctor of Philosophy degree as "DPhil", while most use the abbreviation "PhD"; these are stylistic conventions, and the degrees are in all other respects equivalent. In the United Kingdom, PhD degrees are distinct from other doctorates, most notably the higher doctorates such as DLitt (Doctor of Letters) or DSc (Doctor of Science), which may be granted on the recommendation of a committee of examiners on the basis of a substantial portfolio of submitted (and usually published) research. However, some UK universities still maintain the option of submitting a thesis for the award of a higher doctorate. Recent years have seen the introduction of professional doctorates, which are at the same level as PhDs but more specific in their field.
Most tend not to be solely academic, but combine academic research, a taught component or a professional qualification. These are most notably in the fields of engineering (EngD), educational psychology (DEdPsych), occupational psychology (DOccPsych), clinical psychology (DClinPsych), health psychology (DHealthPsy), social work (DSW), nursing (DNP), public administration (DPA), business administration (DBA), and music (DMA). A more generic degree also used is DProf or ProfD. These typically have a more formal taught component consisting of smaller research projects, as well as a 40,000–60,000-word thesis component, which together are officially considered equivalent to a PhD degree. In the United States, the PhD degree is the highest academic degree awarded by universities in most fields of study. There are more than 282 universities in the United States that award the PhD degree, and those universities vary widely in their criteria for admission, as well as the rigor of their academic programs. Typically, PhD programs require applicants to have a bachelor's degree in a relevant field, and, in many cases in the humanities, a master's degree, reasonably high grades, several letters of recommendation, relevant academic coursework, a cogent statement of interest in the field of study, and satisfactory performance on a graduate-level exam specified by the respective program (e.g., GRE, GMAT). Depending on the specific field of study, completion of a PhD program usually takes four to eight years of study after the bachelor's degree; those students who begin a PhD program with a master's degree may complete their PhD degree a year or two sooner. As PhD programs typically lack the formal structure of undergraduate education, there are significant individual differences in the time taken to complete the degree. Overall, 57% of students who begin a PhD program in the US will complete their degree within ten years, approximately 30% will drop out or be dismissed, and the remaining 13% of students will continue on past ten years. The median age of PhD recipients in the US is 32 years. While many candidates are awarded their degree in their 20s, 6% of PhD recipients in the US are older than 45 years. The number of PhD diplomas awarded by US universities has risen nearly every year since 1957, according to data compiled by the US National Science Foundation. In 1957, US universities awarded 8,611 PhD diplomas; 20,403 in 1967; 31,716 in 1977; 32,365 in 1987; 42,538 in 1997; 48,133 in 2007, and 55,006 in 2015. PhD students at US universities typically receive a tuition waiver and some form of annual stipend.[citation needed] Many US PhD students work as teaching assistants or research assistants. Graduate schools increasingly[citation needed] encourage their students to seek outside funding; many are supported by fellowships they obtain for themselves or by their advisers' research grants from government agencies such as the National Science Foundation and the National Institutes of Health. Many Ivy League and other well-endowed universities provide funding for the entire duration of the degree program (if it is short) or for most of it,[citation needed] especially in the forms of tuition waivers/stipends. 
In Russia, the degree of Candidate of Sciences (Russian: кандидат наук, Kandidat Nauk) was the first advanced research qualification in the former USSR (it was introduced there in 1934) and some Eastern Bloc countries (Czechoslovakia, Hungary), and it is still awarded in some post-Soviet states (the Russian Federation, Belarus, and others). According to the "Guidelines for the recognition of Russian qualifications in the other European countries", in countries with a two-tier system of doctoral degrees (such as the Russian Federation, some post-Soviet states, Germany, Poland, Austria and Switzerland), the degree of Candidate of Sciences should be considered for recognition at the level of the first doctoral degree, while in countries with only one doctoral degree it should be considered for recognition as equivalent to that PhD degree. Since most education systems have only one advanced research qualification granting doctoral degrees or equivalent qualifications (ISCED 2011, par. 270), the degree of Candidate of Sciences (Kandidat Nauk) of the former USSR countries is usually considered to be at the same level as the doctorate or PhD degrees of those countries. According to the Joint Statement by the Permanent Conference of the Ministers for Education and Cultural Affairs of the Länder of the Federal Republic of Germany (Kultusministerkonferenz, KMK), the German Rectors' Conference (HRK) and the Ministry of General and Professional Education of the Russian Federation, the degree of Candidate of Sciences is recognised in Germany at the level of the German degree of Doktor, and the degree of Doktor Nauk at the level of the German Habilitation. The Russian degree of Candidate of Sciences is also officially recognised by the Government of the French Republic as equivalent to the French doctorate. According to the International Standard Classification of Education, for purposes of international educational statistics, Candidate of Sciences belongs to ISCED level 8, "doctoral or equivalent", together with the PhD, DPhil, DLitt, DSc, LLD, Doctorate, and similar degrees. It is mentioned in the Russian version of ISCED 2011 (par. 262) on the UNESCO website as an equivalent to the PhD belonging to this level. In the same way as PhD degrees awarded in many English-speaking countries, the Candidate of Sciences allows its holders to reach the level of Docent. The second doctorate (or post-doctoral degree) awarded in some post-Soviet states, the Doctor of Sciences (Russian: доктор наук, Doktor Nauk), is given as an example of a second advanced research qualification or higher doctorate in ISCED 2011 (par. 270) and is similar to the Habilitation in Germany, Poland and several other countries. It constitutes a higher qualification than the PhD when measured against the European Qualifications Framework (EQF) or the Dublin Descriptors. About 88% of Russian students at state universities study at the expense of budget funds. The average stipend in Russia (as of August 2011[update]) is $430 a year ($35/month), and the average tuition fee in graduate school is $2,000 per year. On 19 June 2013, for the first time in the Russian Federation, defenses were held for a PhD degree awarded by a university, instead of the Candidate of Sciences degree awarded by the State Supreme Certification Commission. Renat Yuldashev, a graduate of the Department of Applied Cybernetics of the Faculty of Mathematics and Mechanics of St. Petersburg State University, was the first to defend his thesis, in the field of mathematics, under the new rules for the PhD SPbSU[clarification needed] degree.
For the defense procedure in the field of mathematics, the organizers drew on the experience of a joint Finnish–Russian research and educational program organized in 2007 by the Faculty of Information Technology of the University of Jyväskylä and the Faculty of Mathematics and Mechanics of St. Petersburg State University: the co-chairs of the program, N. Kuznetsov, G. Leonov and P. Neittaanmäki, were organizers of the first defenses and co-supervisors of the dissertations. Models of supervision At some universities, there may be training for those wishing to supervise PhD studies. There is much literature available, such as Delamont, Atkinson, and Parry (1997). Dinham and Scott (2001) have argued that the worldwide growth in research students has been matched by the increase in the number of what they term "how-to" texts for both students and supervisors, citing examples such as Pugh and Phillips (1987). These authors report empirical data on the benefits to a PhD candidate from publishing; students are more likely to publish with adequate encouragement from their supervisors. Wisker (2005) has reported that research into this field distinguishes two models of supervision: the technical-rationality model of supervision, which emphasises technique, and the negotiated order model, which is less mechanistic and emphasises fluid and dynamic change in the PhD process. These two models were first distinguished by Acker, Hill and Black (1994; cited in Wisker, 2005). Considerable literature exists on the expectations that supervisors may have of their students (Phillips & Pugh, 1987) and the expectations that students may have of their supervisors (Phillips & Pugh, 1987; Wilkinson, 2005) in the course of PhD supervision. Similar expectations are implied by the Quality Assurance Agency's Code for Supervision (Quality Assurance Agency, 1999; cited in Wilkinson, 2005). PhD in the workforce PhD graduates represent a relatively small, elite group within most countries, making up around 1.1% of adults across OECD countries. Slovenia, Switzerland and Luxembourg have higher numbers of PhD graduates per capita. For Slovenia, this is because MSc degrees awarded before the Bologna Process are ranked at the same level of education as the PhD; excluding those, Slovenia has 1.4% PhD graduates, which is comparable to the average in the OECD and EU-23 countries. International PhD equivalent degrees See also References Further reading |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-121] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which includes organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha have an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets and as working animals for transportation, and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sports, trophies or profits. Non-human animals are also an important cultural element of human evolution, having appeared in cave arts and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports. 
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites. 
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. The selective pressures they impose on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic or competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels acquire the nutrients indirectly by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow, to sustain basal metabolism and to fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidation of inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species, including those not yet described, was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion), from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram, in which uncertainty of relationships is indicated with dashed lines; the lineages shown alongside the animals are Holomycota (inc. fungi), Ichthyosporea, Pluriformea and Filasterea. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with a cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges): Porifera branching first, followed by Ctenophora, Placozoa, Cnidaria and Bilateria. Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, with Ctenophora branching first, followed by Porifera, Placozoa, Cnidaria and Bilateria. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
The remaining animals, the great majority (comprising some 29 phyla and over a million species), form the clade Bilateria, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria divides the group into the Xenacoelomorpha, Ambulacraria, Chordata, Ecdysozoa and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food, fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world. 
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since their discovery in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship. See also Notes References External links |
======================================== |