[SOURCE: https://en.wikipedia.org/wiki/Interstellar_travel]
Interstellar travel

Interstellar travel is the hypothetical travel of spacecraft between star systems. Due to the vast distances between the Solar System and nearby stars, interstellar travel is not practicable with current propulsion technologies. To travel between stars within a reasonable amount of time (decades or centuries), an interstellar spacecraft must reach a significant fraction of the speed of light, requiring enormous amounts of energy. Communication with such interstellar craft will experience years of delay due to the speed of light. Collisions with cosmic dust and gas at such speeds can be catastrophic for such spacecraft. Crewed interstellar travel could possibly be conducted more slowly (far beyond the scale of a human lifetime) by making a generation ship. Hypothetical interstellar propulsion systems include nuclear pulse propulsion, the fission-fragment rocket, the fusion rocket, the beamed solar sail, and the antimatter rocket. The benefits of interstellar travel include detailed surveys of habitable exoplanets and distant stars, a comprehensive search for extraterrestrial intelligence, and space colonization. Even though five uncrewed spacecraft have left the Solar System, they are not "interstellar craft" because they were not purposefully designed to explore other star systems. Thus, as of the 2020s, interstellar spaceflight remains a popular trope in speculative future studies and science fiction. A civilization that has mastered interstellar travel is called an interstellar species.

Challenges

Distances between the planets in the Solar System are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 1.5×10⁸ kilometers (93 million miles). Venus, the closest planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the farthest planet from the Sun, is 29.8 AU away. As of January 20, 2023, Voyager 1, the farthest human-made object from Earth, is 163 AU away, exiting the Solar System at a speed of 17 km/s (0.006% of the speed of light). The closest known star, Proxima Centauri, is approximately 268,332 AU away, or over 9,000 times farther than Neptune. Because of this, distances between stars are usually expressed in light-years (defined as the distance that light travels in vacuum in one Julian year) or in parsecs (one parsec is 3.26 ly, the distance at which stellar parallax is exactly one arcsecond, hence the name). Light in a vacuum travels around 300,000 kilometres (186,000 mi) per second, so 1 light-year is about 9.461×10¹² kilometers (5.879 trillion miles) or 63,241 AU. Hence, Proxima Centauri is approximately 4.243 light-years from Earth. Another way of understanding the vastness of interstellar distances is by scaling. One of the closest stars to the Sun, Alpha Centauri A (a Sun-like star that is one of two companions of Proxima Centauri), can be pictured by scaling down the Earth–Sun distance to one meter (3.28 ft). On this scale, the distance to Alpha Centauri A would be 276 kilometers (171 miles). The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/390 of a light-year in 46 years and is currently moving at 1/17,600 the speed of light. At this rate, a journey to Proxima Centauri would take 75,000 years.

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½mv², where m is the final mass.
If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the lower bound for the required energy is doubled, to mv². The velocity needed for a crewed round trip of a few decades to even the nearest star is several thousand times greater than that of present space vehicles. Because of the v² term in the kinetic energy formula, this means that millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 petajoules or 4.50×10¹⁷ joules or 125 terawatt-hours (world energy consumption in 2008 was 143,851 terawatt-hours), without factoring in the efficiency of the propulsion mechanism. This energy has to be generated onboard from stored fuel, harvested from the interstellar medium, or projected over immense distances.

A knowledge of the properties of the interstellar gas and dust through which the vehicle must pass is essential for the design of any interstellar space mission. A major issue with traveling at extremely high speeds is that, owing to the high relative speeds and large kinetic energies involved, collisions with interstellar dust could cause considerable damage to the craft. Various shielding methods to mitigate this problem have been proposed. Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects and methods of mitigating them have been discussed in the literature, but many unknowns remain. An additional consideration is that, due to the non-homogeneous distribution of interstellar matter around the Sun, these risks would vary between different trajectories. Although a high-density interstellar medium may cause difficulties for many interstellar travel concepts, interstellar ramjets, and some proposed concepts for decelerating interstellar spacecraft, would actually benefit from a denser interstellar medium.

The crew of an interstellar ship would face several significant hazards, including the psychological effects of long-term isolation, the physiological effects of extreme acceleration, the effects of exposure to ionising radiation, and the physiological effects of weightlessness on the muscles, joints, bones, immune system, and eyes. There also exists the risk of impact by micrometeoroids and other space debris. These risks represent challenges that have yet to be overcome.

The speculative fiction writer and physicist Robert L. Forward argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity and has not yet reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more advanced propulsion (the incessant obsolescence postulate). In 2006, Andrew Kennedy made this "wait calculation" more precise: for a given destination and growth rate in propulsion capacity, there is an ideal departure point that overtakes earlier launches and will not be overtaken by later ones. For a trip to Barnard's Star he concluded that "an interstellar journey of 6 light years can best be made in about 635 years from now if growth continues at about 1.4% per annum", or approximately 2641 AD.
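These figures can be checked with a few lines of arithmetic. The sketch below is an illustration, not part of the source article; in particular, the 20 km/s baseline speed and the simplified arrival-time model in the wait calculation are assumptions made for the example, not Kennedy's actual formulation.

```python
# Minimal sketch checking the energy and travel-time figures quoted above.
import math

C = 299_792_458.0          # speed of light, m/s
LY = 9.461e15              # one light-year, m
YEAR = 3.156e7             # one Julian year, s

# Kinetic-energy lower bound: accelerate 1 metric ton to 0.1 c.
m, v = 1000.0, 0.1 * C
K = 0.5 * m * v**2
print(f"K = {K:.2e} J = {K / 3.6e15:.0f} TWh")          # ~4.5e17 J, ~125 TWh
print(f"with deceleration by engines: {2 * K:.2e} J")   # lower bound doubles

# Voyager-1-speed cruise to Proxima Centauri (4.243 ly at 17 km/s).
t = 4.243 * LY / 17_000 / YEAR
print(f"cruise time at 17 km/s: {t:,.0f} years")        # ~75,000 years

# Toy wait calculation: if achievable cruise speed grows by g per year, a
# probe leaving in year t arrives at t + T0 / (1 + g)**t, where T0 is the
# trip time at today's speed. The optimum balances waiting against flying.
def best_departure(T0, g):
    return math.log(math.log(1 + g) * T0) / math.log(1 + g)

T0 = 6 * LY / 20_000 / YEAR   # assumed: 6 ly at a 20 km/s baseline speed
print(f"optimal departure: ~{best_departure(T0, 0.014):.0f} years from now")
```

With these assumptions the optimum departure lands roughly five centuries out, the same order of magnitude as Kennedy's 635-year figure.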
Such wait calculations may be the most significant calculations for competing cultures occupying the galaxy.

Prime targets for interstellar travel

There are 59 known stellar systems within 40 light years of the Sun, containing 81 visible stars. Several of these could be considered prime targets for interstellar missions. Existing astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Proposed methods

"Slow" interstellar missions (still fast by other standards) based on current and near-future propulsion technologies are associated with trip times ranging from several decades to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes like those used in the Voyager program. By taking along no crew, the cost and complexity of the mission are significantly reduced, as is the mass that needs to be accelerated, although technology lifetime is still a significant issue, next to obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus, Project Dragonfly, Project Longshot, and more recently Breakthrough Starshot.

Near-lightspeed nano-spacecraft, built on existing microchip technology with a newly developed nanoscale thruster, might be possible within the near future. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called the "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators shooting conductive nanoparticles out into space. Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that, because very small probes are easily deflected by magnetic fields, micrometeorites, and other dangers, a large number of nanoprobes would need to be sent to ensure that at least one survives the journey and reaches the destination. As a near-term solution, small, laser-propelled interstellar probes based on current CubeSat technology were proposed in the context of Project Dragonfly. Starseed is a similar proposed method of launching interstellar nanoprobes at one-third light speed. The proposed launcher is a 1,000 km-long, small-diameter hollow wire lined with electrodes, forming an electrostatic accelerator tube, similar to K. Eric Drexler's ideas.

In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways. They can be distinguished by the "state" in which humans are transported on board the spacecraft. A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises. Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships, in which the passengers lie inert for the long duration of the voyage.
A robotic interstellar mission carrying some number of frozen early-stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents. Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (the Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way.

If a spaceship could average 10 percent of light speed (and decelerate at the destination, for human crewed missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts have been proposed that might eventually be developed to accomplish this (see § Propulsion below), but none of them is ready for near-term development (within a few decades) at acceptable cost.

Physicists generally believe faster-than-light travel is impossible. Relativistic time dilation allows a traveler to experience time more slowly, the closer their speed is to the speed of light. This apparent slowing becomes noticeable when velocities above 80% of the speed of light are attained. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were capable of continuously generating around 1 g of acceleration (which is comfortable for humans), the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time. Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. For example, a spaceship could travel to a star 32 light-years away, initially accelerating at a constant 1.03 g (i.e., 10.1 m/s²) for 1.32 years (ship time), then stopping its engines and coasting for the next 17.3 years (ship time) at a constant speed, then decelerating again for 1.32 ship-years, and coming to a stop at the destination. After a short visit, the astronaut could return to Earth the same way. After the full round trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch. From the viewpoint of the astronaut, onboard clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 light-years per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light-years as measured by the astronaut. At higher speeds, the time on board will run even slower, so the astronaut could travel to the center of the Milky Way (30,000 light years from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 light-year per Earth year, so, when back home, the astronaut will find that more than 60 thousand years will have passed on Earth. Regardless of how it is achieved, a propulsion system that could produce acceleration continuously from departure to arrival would be the fastest method of travel.
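The ship-time and Earth-time figures above follow from the standard special-relativity formulas for motion under constant proper acceleration. The sketch below is an illustration, not from the article; it computes the accelerate-half, decelerate-half profile described in the next paragraph, using Proxima Centauri as an example.

```python
# Constant-proper-acceleration (hyperbolic motion) trip: accelerate at a for
# the first half of the distance, decelerate at a for the second half.
import math

C = 299_792_458.0   # m/s
G = 9.81            # 1 g, m/s^2
LY = 9.461e15       # m
YEAR = 3.156e7      # s

def brachistochrone(d_ly, a=G):
    d = d_ly * LY / 2                                     # each phase: half
    t_half = math.sqrt((d / C) ** 2 + 2 * d / a)          # Earth-frame time
    tau_half = (C / a) * math.acosh(1 + a * d / C ** 2)   # ship proper time
    v_peak = a * t_half / math.sqrt(1 + (a * t_half / C) ** 2)
    return 2 * t_half / YEAR, 2 * tau_half / YEAR, v_peak / C

earth_t, ship_t, v = brachistochrone(4.243)               # Proxima Centauri
print(f"Earth frame: {earth_t:.1f} yr, ship frame: {ship_t:.1f} yr, "
      f"peak speed: {v:.2f} c")
# -> roughly 5.9 yr Earth time, 3.5 yr ship time, peaking near 0.95 c
```

Applied to the 30,000-light-year trip to the galactic center, the same formulas give about 20 ship-years each way at 1 g, consistent with the 40-year round trip quoted above.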
A constant acceleration journey is one where the propulsion system accelerates the ship at a constant rate for the first half of the journey, and then decelerates for the second half, so that it arrives at the destination stationary relative to where it began. If this were performed with an acceleration similar to that experienced at the Earth's surface, it would have the added advantage of producing artificial "gravity" for the crew. Supplying the energy required, however, would be prohibitively expensive with current technology. From the perspective of a planetary observer, the ship will appear to accelerate steadily at first, but then more gradually as it approaches the speed of light (which it cannot exceed). It will undergo hyperbolic motion. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey. From the perspective of an onboard observer, the crew will feel a gravitational field opposite the engine's acceleration, and the universe ahead will appear to fall in that field, undergoing hyperbolic motion. As part of this, distances between objects in the direction of the ship's motion will gradually contract until the ship begins to decelerate, at which time an onboard observer's experience of the gravitational field will be reversed. When the ship reaches its destination, if it were to exchange a message with its origin planet, it would find that less time had elapsed on board than had elapsed for the planetary observer, due to time dilation and length contraction. The result is a fast journey for the crew.

Propulsion

According to physicist Michio Kaku, although chemical rocket engines have a thrust of several thousand tons, they operate only for a few minutes, so in terms of final acceleration they are inferior, for example, to ion engines with low thrust and a lifespan of several years; but ion thrusters and plasma engines are too weak for human flight to the stars. The most advanced electric rocket engines available today have a characteristic velocity Δv of about 100 km/s, which, according to Edgar Choueiri, is too slow for travel to distant stars. For an object to reach the nearest star in a reasonable time (approximately half a century), it must accelerate to a near-relativistic speed in a short period of time and then, if possible, decelerate again. The problem can be illustrated by the Tsiolkovsky rocket equation, Δv = vₑ ln(m₀/m₁), for a launch mass m₀ and a final (payload) mass m₁. To achieve a large change in velocity Δv, a high effective exhaust velocity vₑ (equivalently, a high specific impulse Isp) is required. Furthermore, to obtain the required energy, a large amount of propellant must be processed, i.e. a large mass ratio m₀/m₁. Another effect that should be accounted for is the thickening (increase of viscosity) of the fuel and of the exhaust gases at relativistic speeds. Based on these considerations, two categories of engines can be excluded.
According to Dr. Tony Martin, controlled-fusion engines and nuclear–electric systems have very low thrust, because the equipment needed to convert nuclear energy into electricity has a large mass; the resulting small acceleration would take a century to reach the desired speed (for example, 15% of the speed of light), making them unsuitable for interstellar flight within a single human lifetime. Thermodynamic nuclear engines of the NERVA type require a great quantity of fuel. Photon rockets would have to generate power at a rate of 3×10⁹ W per kilogram of vehicle mass and would require mirrors with an absorptivity of less than 1 part in 10⁶. The interstellar ramjet's problems are the tenuous interstellar medium, with a density of about 1 atom/cm³, the large-diameter funnel needed, and the high power required for its electric field. Thus the only suitable propulsion method for Project Daedalus was thermonuclear pulse propulsion.

All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial mass (m₀, including fuel) to final mass (m₁, fuel depleted). Very high specific power, the ratio of thrust to total vehicle mass, is required to reach interstellar targets within sub-century time-frames. Some heat transfer is inevitable, resulting in an extreme thermal load. Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.

Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, which are ejected at speeds of up to 12,000 km/s (7,500 mi/s). With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the speed of light. For maximum velocity, the reaction mass should optimally consist of fission products, the "ash" of the primary energy source, so no extra reaction mass need be accounted for in the mass ratio.

Based on work from the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse and high specific power. Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion with pure deuterium fusion detonations and a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v, allowing a flight time to Alpha Centauri of 130 years. Later studies indicate that the top cruise velocity that can theoretically be achieved by an Orion starship powered by Teller–Ulam thermonuclear units, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08–0.1c). An atomic (fission) Orion can achieve perhaps 3–5% of the speed of light. A nuclear pulse drive starship powered by fusion–antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter–antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light. In each case, saving fuel for slowing down halves the maximum speed.
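The burnup figures above map directly onto exhaust velocity, and the rocket equation then fixes the mass ratio. Below is a minimal sketch, an illustration rather than the article's own calculation; it assumes an ideal, non-relativistic conversion of released energy into exhaust kinetic energy.

```python
# How burnup fraction bounds exhaust velocity, and what the rocket equation
# then implies for the mass ratio.
import math

C = 299_792_458.0  # m/s

def exhaust_velocity(burnup):
    """Ideal exhaust speed if a fraction `burnup` of the propellant's rest
    mass-energy becomes exhaust kinetic energy: 0.5 * v_e^2 = burnup * c^2."""
    return C * math.sqrt(2 * burnup)

print(f"fission (0.1%): {exhaust_velocity(0.001) / C:.3f} c")   # ~0.045 c
print(f"fusion  (0.4%): {exhaust_velocity(0.004) / C:.3f} c")   # ~0.089 c

# Tsiolkovsky: delta-v = v_e * ln(m0 / m1)  ->  m0 / m1 = exp(dv / v_e).
# Dyson's 1968 Orion figures: v_e = 15,000 km/s, delta-v = 20,000 km/s.
v_e, dv = 15e6, 20e6  # m/s
print(f"required mass ratio: {math.exp(dv / v_e):.1f}")         # ~3.8
```

The results line up with the prose: fission's ~0.1% mass-energy release caps exhaust velocity near 5% of light speed, and fusion's 0.3–0.9% lands in the quoted 4–10% range.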
The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity. Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming, and for very high-performance interplanetary flight.

In the 1970s, the nuclear pulse propulsion concept was further refined by Project Daedalus through the use of externally triggered inertial confinement fusion, in this case producing fusion explosions by compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams, and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes. A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would therefore need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station. Another issue to be considered would be the g-forces imparted to a rapidly accelerated spacecraft, cargo, and passengers inside (see Inertia negation).

Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of the speed of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light. These would "burn" such light-element fuels as deuterium, tritium, ³He, ¹¹B, and ⁷Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases <0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of the speed of light. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries.

Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II", designed and optimized for crewed Solar System exploration, based on the D–³He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10⁻³ g, a ship initial mass of ~1,700 metric tons, and a payload fraction above 10%.
Although these figures are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark for what may be approachable within several decades, not impossibly beyond the current state of the art. Based on the concept's 2.2% burnup fraction, it could achieve a pure fusion-product exhaust velocity of ~3,000 km/s.

An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required, and to store it safely, it would be theoretically possible to reach speeds of several tens of percent of that of light. Whether antimatter propulsion could lead to the higher speeds (>90% of that of light) at which relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, is doubtful owing to the large quantity of antimatter that would be required. Speculating that production and storage of antimatter should become feasible, two further issues need to be considered. First, in the annihilation of antimatter, much of the energy is lost as high-energy gamma radiation, and especially also as neutrinos, so that only about 40% of mc² would actually be available if the antimatter were simply allowed to annihilate thermally into radiation. Even so, the energy available for propulsion would be substantially higher than the ~1% of mc² yield of nuclear fusion, the next-best rival candidate. Second, heat transfer from the exhaust to the vehicle seems likely to deposit enormous wasted energy into the ship (e.g., for 0.1 g of ship acceleration, approaching 0.3 terawatts per ton of ship mass), considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming shielding was provided to protect the payload (and passengers on a crewed vehicle), some of the energy would inevitably heat the vehicle, and may thereby prove a limiting factor if useful accelerations are to be achieved. More recently, Friedwardt Winterberg proposed that a matter–antimatter GeV gamma-ray laser photon rocket is possible by means of a relativistic proton–antiproton pinch discharge, where the recoil from the laser beam is transmitted by the Mössbauer effect to the spacecraft.

Rockets deriving their power from external sources, such as a laser, could replace their internal energy source with an energy collector, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis proposed an interstellar probe propelled by an ion thruster powered by energy beamed to it from a base-station laser. Lenard and Andrews proposed using a base-station laser to accelerate nuclear fuel pellets towards a Mini-Mag Orion spacecraft that ignites them for propulsion.

A problem with all traditional rocket propulsion methods is that the spacecraft would need to carry its fuel with it, thus making it very massive, in accordance with the rocket equation. Several concepts attempt to escape from this problem. In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton chain reaction, and expel it out of the back.
Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design.[citation needed] Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light. The limitation is due to the fact that the reaction can only accelerate the propellant to 0.12c. Thus the drag of catching interstellar dust and the thrust of accelerating that same dust to 0.12c would be equal once the speed reaches 0.12c, preventing further acceleration.

A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse-propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar craft with a 100-kilometer light sail in the destination star system, without requiring a laser array to be present in that system. In this scheme, a 30-kilometer secondary sail is deployed to the rear of the spacecraft, while the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail onto the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload. In 2002, Geoffrey A. Landis of NASA's Glenn Research Center also proposed a laser-propelled sail ship hosting a diamond sail a few nanometers thick. With this proposal, the interstellar ship would, theoretically, be able to reach 10 percent of the speed of light. It has also been proposed to use beamed propulsion to accelerate a spacecraft and electromagnetic propulsion to decelerate it, thus eliminating the problem the Bussard ramjet has with the drag produced during acceleration. A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and in the interstellar medium. The following table lists some example concepts using beamed laser propulsion, as proposed by the physicist Robert L. Forward. The following table is based on work by Heller, Hippke, and Kervella.

Achieving start–stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-stage vehicles on a vast scale. Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation. Dynamic soaring as a way to travel across interstellar space has also been proposed.

Uploaded human minds or AI could be transmitted with laser or radio signals at the speed of light. This requires a receiver at the destination, which would first have to be set up, e.g., by humans, probes, self-replicating machines (potentially along with AI or uploaded humans), or an alien civilization (which might also be in a different galaxy, perhaps a Kardashev type III civilization). A theoretical idea for enabling interstellar travel is to propel a starship by creating an artificial black hole and using a parabolic reflector to reflect its Hawking radiation.
Although beyond current technological capabilities, a black hole starship offers some advantages compared to other possible methods. Getting the black hole to act as a power source and engine also requires a way to convert the Hawking radiation into energy and thrust. One potential method involves placing the hole at the focal point of a parabolic reflector attached to the ship, creating forward thrust. A slightly easier, but less efficient, method would involve simply absorbing all the gamma radiation heading towards the fore of the ship to push it onwards, and letting the rest shoot out the back.

A radio frequency (RF) resonant cavity thruster is a device that is claimed to be a spacecraft thruster. In 2016, the Advanced Propulsion Physics Laboratory at NASA reported observing a small apparent thrust from one such test, a result not since replicated. One of the designs is called the EMDrive. In December 2002, Satellite Propulsion Research Ltd described a working prototype with an alleged total thrust of about 0.02 newtons, powered by an 850 W cavity magnetron. The device could operate for only a few dozen seconds before the magnetron failed due to overheating. The latest test of the EMDrive concluded that it does not work.

Proposed in 2019 by NASA scientist Dr. David Burns, the helical engine concept would use a particle accelerator to accelerate particles to near the speed of light. Since particles traveling at such speeds acquire more mass, it is believed that this mass change could create acceleration. According to Burns, the spacecraft could theoretically reach 99% of the speed of light.

Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light, but even the most serious-minded of these are highly speculative. It is also debatable whether faster-than-light travel is physically possible, in part because of causality concerns: travel faster than light may, under certain conditions, permit travel backwards in time within the context of special relativity. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter, and it is not known whether it could be produced in sufficient quantities, if at all.

In physics, the Alcubierre drive is based on an argument, within the framework of general relativity and without the introduction of wormholes, that it is possible to modify spacetime in a way that allows a spaceship to travel with an arbitrarily large speed by a local expansion of spacetime behind the spaceship and an opposite contraction in front of it. Alcubierre stated in an email to William Shatner that his theory was directly inspired by the term used in the television series Star Trek, and he cites the "'warp drive' of science fiction" in his 1994 article. Nevertheless, this concept would require the spaceship to incorporate a region of exotic matter, or the hypothetical concept of negative mass.

Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe across an Einstein–Rosen bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equations of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical. However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic strings.
The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes.

Designs and studies

Project Hyperion has looked into various feasibility issues of crewed interstellar travel. Notable results of the project include an assessment of world-ship system architectures and adequate population size. Its members continue to publish on crewed interstellar travel in collaboration with the Initiative for Interstellar Studies. The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship based on the ideas of Robert Duncan-Enzmann. The spacecraft as proposed used a 12,000,000-ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units. Twice as long as the Empire State Building is tall and assembled in orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems.

NASA has been researching interstellar travel since its formation, translating important foreign-language papers and conducting early studies on applying fusion propulsion (in the 1960s) and laser propulsion (in the 1970s) to interstellar travel. In 1994, NASA and JPL cosponsored a "Workshop on Advanced Quantum/Relativity Theory Propulsion" to "establish and use new frames of reference for thinking about the faster-than-light (FTL) question". The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.") identified some breakthroughs that are needed for interstellar travel to be possible. Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. Such a ship would take about 43 years to reach Alpha Centauri if it passed through the system without stopping. Slowing down to stop at Alpha Centauri could increase the trip to 100 years, whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by.

The 100 Year Starship (100YSS) study was a one-year project to assess the attributes of, and lay the groundwork for, an organization that can carry forward the 100 Year Starship vision. 100YSS-related symposia were organized between 2011 and 2015. Harold ("Sonny") White of NASA's Johnson Space Center is a member of Icarus Interstellar, the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million, with the aim of helping to make interstellar travel possible.

Non-profit organizations

A few organisations dedicated to interstellar propulsion research and advocacy exist worldwide. These are still in their infancy, but are already backed by a membership of a wide variety of scientists, students, and professionals.

Feasibility

The energy requirements make interstellar travel very difficult.
It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System. Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least 100 times the total energy output of the entire world [in a given year] would be required to send a probe to the nearest star. Astrophysicist Sten Odenwald stated that the basic problem is that, through intensive studies of thousands of detected exoplanets, most of the closest destinations within 50 light years do not yield Earth-like planets in their stars' habitable zones. Given the multitrillion-dollar expense of some of the proposed technologies, travelers will have to spend up to 200 years traveling at 20% of the speed of light to reach the best known destinations. Moreover, once the travelers arrive at their destination (by any means), they will not be able to travel down to the surface of the target world and set up a colony unless the atmosphere is non-lethal. The prospect of making such a journey, only to spend the rest of the colony's life inside a sealed habitat and venturing outside in a spacesuit, may eliminate many prospective targets from the list. Moving at a speed close to the speed of light and encountering even a tiny stationary object like a grain of sand would have fatal consequences. For example, a gram of matter moving at 90% of the speed of light carries a kinetic energy corresponding to a small nuclear bomb (around 30 kt of TNT). One of the major stumbling blocks, assuming all other considerations are solved, is carrying enough onboard spares and repair facilities for such a lengthy journey, without access to the resources available on Earth.

Explorative high-speed missions to Alpha Centauri, as planned for by the Breakthrough Starshot initiative, are projected to be realizable within the 21st century. It is alternatively possible to plan for uncrewed slow-cruising missions taking millennia to arrive. These probes would not be for human benefit, in the sense that one cannot foresee whether there would be anybody on Earth still interested in the science data transmitted back. An example would be the Genesis mission, which aims to bring unicellular life, in the spirit of directed panspermia, to habitable but otherwise barren planets. Comparatively slow-cruising Genesis probes, with a typical speed of c/300, corresponding to about 1,000 km/s, can be decelerated using a magnetic sail. Uncrewed missions not for human benefit would hence be feasible.

Discovery of Earth-like planets

On August 24, 2016, the Earth-size exoplanet Proxima Centauri b, orbiting in the habitable zone of Proxima Centauri, 4.2 light-years away, was announced. This is the nearest known potentially habitable exoplanet outside the Solar System. In February 2017, NASA announced that its Spitzer Space Telescope had revealed seven Earth-size planets in the TRAPPIST-1 system, orbiting an ultra-cool dwarf star 40 light-years away from the Solar System. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery set a new record for the greatest number of habitable-zone planets found around a single star outside the Solar System.
All of these seven planets could have liquid water – the key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable zone.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_note-13]
Extraterrestrial life

Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.

Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but that their existence was a matter of speculation.

In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere altogether. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar System studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given the human history of exploiting other societies.

Context

Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life did not yet exist. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disks of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets, by meteoroids for example, in a process called panspermia.

During most of their evolution, stars combine hydrogen nuclei into helium nuclei by stellar fusion; the slightly lower mass of the resulting helium allows the star to release the difference as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. The larger stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins clouds that will eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos, not a rarity of the Solar System.

Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that would be lethal to humans, the distances cause long time delays: New Horizons took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter accounts for more combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology.

There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even a guarantee that it actually has liquid water. Venus is located in the Solar System's habitable zone but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
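As the next paragraph notes, the habitable-zone distance depends on the star. A standard first-order scaling, sketched below as an illustration (it is not from the article, and the luminosities are approximate literature values), puts the Earth-equivalent flux distance at √(L/L☉) AU, since stellar flux falls off as 1/d².

```python
# First-order habitable-zone scaling: the distance at which a planet receives
# Earth-like stellar flux grows as the square root of the star's luminosity.
import math

def earth_flux_distance_au(luminosity_solar):
    """Distance (AU) receiving the same stellar flux Earth gets at 1 AU."""
    return math.sqrt(luminosity_solar)

# Approximate luminosities in solar units (illustrative values):
for star, L in [("Sun", 1.0),
                ("Proxima Centauri (red dwarf)", 0.0017),
                ("Sirius A", 25.0)]:
    print(f"{star}: Earth-equivalent flux at ~{earth_flux_distance_au(L):.2f} AU")
# A red dwarf's zone hugs the star; a brighter star pushes it far outward.
```

This crude estimate ignores atmospheric feedbacks and tidal effects, which is why, as the article stresses, being in the habitable zone does not by itself make a planet habitable.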
The actual distances of the habitable zones vary according to the type of star, and even the solar activity of each specific star influences local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits change along with the star's stellar evolution.

The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable.

Likelihood of existence

Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a habitability similar to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make up life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors, ranging from its location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet would simultaneously meet all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is N = R* · fp · ne · fl · fi · fc · L, where N is the number of communicative civilizations, R* is the rate of star formation in the galaxy, fp the fraction of stars with planets, ne the number of potentially habitable planets per star with planets, fl the fraction of those planets that develop life, fi the fraction of those that develop intelligent life, fc the fraction of civilizations that release detectable signals, and L the length of time over which such civilizations release signals. Drake's proposed estimates, with all the numbers on the right side of the equation agreed to be speculative and open to substitution, give 10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000.[better source needed] The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation.

Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets; in other words, there are 6.25×10¹⁸ stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis, which explains the formation of the Solar System and other planetary systems, suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, offering a potential explanation of the Fermi paradox.

Biochemical basis

If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants, to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist. The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars; life on Earth depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
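Returning to the Drake equation above, a minimal sketch (not from the article) multiplies out Drake's illustrative values; as noted, every factor is speculative and open to substitution.

```python
# Drake equation with Drake's own example values: N = R* fp ne fl fi fc L.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicative civilizations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=5,   # star-formation rate, stars per year
          f_p=0.5,    # fraction of stars with planets
          n_e=2,      # potentially habitable planets per star with planets
          f_l=1,      # fraction of those planets that develop life
          f_i=0.2,    # fraction of those that develop intelligence
          f_c=1,      # fraction releasing detectable signals
          L=10_000)   # years such a civilization keeps signaling
print(N)              # 10,000, matching the article's example product
```

The structure of the calculation makes the article's criticism concrete: since N is a straight product, an order-of-magnitude shift in any single factor shifts the answer by the same order of magnitude.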
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the speeds of the atoms, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane, or propane.

Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, and antimony (three bonds), and carbon, silicon, germanium, and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon; and in the atmosphere, carbon and nitrogen. Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life.

Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection, a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution. So far no alternative process that could achieve such a result has been conceived, even a hypothetical one. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place thousands of millions of years after the origin of life, and its causes are not yet fully known. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from those niches. The conditions on other planets in the Solar System, and on worlds in galaxies beyond the Milky Way, are generally harsh and seem too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, a lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would once have seemed unlikely to harbor life. Fossil evidence, together with theories backed by years of research and study, has marked environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments can be considered extreme when compared to the typical ecosystems that the majority of life on Earth now inhabits, as hydrothermal vents are scorching hot where magma escaping from the Earth's mantle meets the much colder oceanic water. Even today, diverse populations of bacteria inhabit the areas surrounding these hydrothermal vents, which suggests that some form of life could be supported even in the harshest of environments, such as those on other planets of the Solar System. The aspect of these harsh environments that makes them promising both for the origin of life on Earth and for the possible emergence of life on other planets is that chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to utilize energy through reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and therefore these carbon-fixing compounds were necessary for the survival and possible origin of life on Earth. From the limited information scientists have about the atmospheres of planets elsewhere in the Milky Way galaxy and beyond, those atmospheres are most likely reducing or very low in oxygen, especially when compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds occurring around hydrothermal vents could also occur on these planets' surfaces and possibly result in the origin of extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No intelligence other than that of humans is known to exist or to have ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. It has a runaway greenhouse effect, the hottest surface in the Solar System, sulfuric acid clouds, and a thick carbon-dioxide atmosphere at enormous pressure, and all of its surface liquid water has been lost. Comparing the two planets helps to understand the precise differences that lead to beneficial or harmful conditions for life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground.
As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. On Europa, the ocean is in contact with the rocky interior, which helps the chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into at all, as it releases water into space in eruption columns. The space probe Cassini flew through one of these columns, but could not make a full study because NASA had not expected this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons. However, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and it uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. However, the lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms that records the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants using photosynthesis. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the form in which they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that are unlikely to be caused by natural processes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves, and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this, as it can manage large amounts of data and is devoid of biases and preconceptions. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, which could be detectable. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component of the development of a potential extraterrestrial technological civilization, as it is on Earth. Fossil fuels may well be generated and used on such worlds too. The abundance of chlorofluorocarbons in an atmosphere could also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development. However, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Those speculative structures would cause an excess of infrared radiation, which telescopes may notice. Infrared excess is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show excess infrared radiation. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. On average, there is at least one planet per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions.
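As a back-of-the-envelope check of the figures just quoted, a short Python sketch; note that the fraction of Milky Way stars taken to be Sun-like is an assumed value (roughly 27.5%, chosen here so the product reproduces the quoted 11 billion) and is not stated in the text.

total_stars = 200e9        # assumed Milky Way star count, as quoted above
sunlike_fraction = 0.275   # assumption for illustration, not from the text
eta_earth = 1 / 5          # Earth-sized habitable-zone planets per Sun-like star (as quoted)

habitable = total_stars * sunlike_fraction * eta_earth
print(f"{habitable:.2e}")  # ~1.10e+10, i.e. about 11 billion planets

Including red dwarfs amounts to applying the occurrence rate to a much larger share of the 200 billion stars, which is how the 40 billion figure arises.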
The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known was PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs.
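To see why transit spectrography favors small, dim stars, consider the standard geometric approximation that the fraction of starlight blocked during a transit is roughly the square of the planet-to-star radius ratio. The following Python sketch uses illustrative radii; the white-dwarf radius in particular is a rough assumed value, not a figure from the text.

R_EARTH = 6.371e6       # meters
R_SUN = 6.957e8         # meters
R_WHITE_DWARF = 7.0e6   # assumed: white dwarfs are roughly Earth-sized

def transit_depth(r_planet, r_star):
    # Fraction of the star's light blocked by the transiting planet.
    return (r_planet / r_star) ** 2

print(transit_depth(R_EARTH, R_SUN))          # ~8.4e-05: a 0.008% dip, hard to measure
print(transit_depth(R_EARTH, R_WHITE_DWARF))  # ~0.83: a very deep, easily detected dip

The deeper the dip, the easier it is to isolate the small additional absorption contributed by the planet's atmosphere, which is why compact stars make such measurements more feasible.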
History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars from Ancient Greece were the first to consider that the universe is inherently understandable, and they rejected explanations based on supernatural, incomprehensible forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursor ideas to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the idea that Earth is round and not flat. The cosmos was first structured in a geocentric model, which held that the sun and all other celestial bodies revolve around Earth. However, those bodies were not considered worlds. In the Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world, and that the world would eventually return to the cosmos. Eventually two schools emerged: the atomists, who thought that matter both on Earth and in the cosmos was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and its plants must also have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all of the earth element naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center, it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from current knowledge about the structure of the universe, and they did not postulate the existence of planetary systems other than the Solar System. When those authors wrote about other worlds, they meant places located at the center of their own systems, with their own stellar vaults and cosmos surrounding them. The Greek ideas and the disputes between atomists and Aristotelians outlived the decline of Greek civilization. The Great Library of Alexandria compiled information about these ideas, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and that knowledge spread through the Byzantine Empire. From there it eventually returned to Europe during the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to these topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the sun rather than Earth. His proposal had little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles, but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, perfected by Galileo Galilei, dispelled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas were met with resistance from the Catholic Church. Galileo was tried for advocating the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also the reasons it works that way. There was very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not just lights, but physical objects. The notion that life may exist on them as well soon became an ongoing topic of discussion, although one with no practical way to investigate it. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals – which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions of each celestial body: it was simply assumed that life would thrive anywhere. That theory was disproved by Louis Pasteur in the 19th century. Even so, popular belief in thriving alien civilisations elsewhere in the solar system remained strong until Mariner 4 and Mariner 9 returned close-up images of Mars, which definitively debunked the idea of the existence of Martians and lowered expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among those authors are Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century. The growing presence of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, but later observations with more powerful instruments revealed that all such "discoveries" were natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took higher-resolution photos showing that there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is studied by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed as either likely to be similar in all forms of life across the cosmos or only native to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth comes from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse. The 20th century came with great technological advances, speculation about future hypothetical technologies, and an increase in the general population's basic knowledge of science thanks to science popularization through the mass media. The public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships from alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of those eras failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, or weather phenomena, or as hoaxes. Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can exist only on Earth, but interest in extraterrestrial life increased regardless, as a result of advances in several sciences. The knowledge of planetary habitability allows scientists to consider, in scientific terms, the likelihood of finding life on each specific celestial body, as it is known which features are beneficial or harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft allow robots to be sent to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and there are technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that at least other planets are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance".
In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, as aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. In 1977, a committee of the United Nations Office for Outer Space Affairs spent a year discussing strategies for interacting with extraterrestrial life or intelligence, but the discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research, and he acknowledged the possibility of the existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of unidentified aerospace phenomena, and it maintains a publicly accessible database of such phenomena with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". However, he disagrees with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, such beings were at first not thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do this was to add body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and later as CGI became more effective and less expensive. Real-life events sometimes captivate the public imagination and influence works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype used in works of fiction.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Group_behaviour] | [TOKENS: 7431] |
Contents Group dynamics Group dynamics is a system of behaviors and psychological processes occurring within a social group (intragroup dynamics), or between social groups (intergroup dynamics). The study of group dynamics can be useful in understanding decision-making behavior, tracking the spread of diseases in society, creating effective therapy techniques, and following the emergence and popularity of new ideas and technologies. These applications of the field are studied in psychology, sociology, anthropology, political science, epidemiology, education, social work, leadership studies, business and managerial studies, as well as communication studies. History The history of group dynamics (or group processes) has a consistent, underlying premise: "the whole is greater than the sum of its parts." A social group is an entity that has qualities which cannot be understood just by studying the individuals that make up the group. In 1924, Gestalt psychologist Max Wertheimer proposed that "There are entities where the behaviour of the whole cannot be derived from its individual elements nor from the way these elements fit together; rather the opposite is true: the properties of any of the parts are determined by the intrinsic structural laws of the whole". As a field of study, group dynamics has roots in both psychology and sociology. Wilhelm Wundt (1832–1920), credited as the founder of experimental psychology, had a particular interest in the psychology of communities, which he believed possessed phenomena (human language, customs, and religion) that could not be described through a study of the individual. On the sociological side, Émile Durkheim (1858–1917), who was influenced by Wundt, also recognized collective phenomena, such as public knowledge. Other key theorists include Gustave Le Bon (1841–1931), who believed that crowds possessed a 'racial unconscious' with primitive, aggressive, and antisocial instincts, and William McDougall, who believed in a 'group mind,' which had a distinct existence born from the interaction of individuals. Eventually, the social psychologist Kurt Lewin (1890–1947) coined the term group dynamics to describe the positive and negative forces within groups of people. In 1945, he established the Group Dynamics Research Center at the Massachusetts Institute of Technology, the first institute devoted explicitly to the study of group dynamics. Throughout his career, Lewin was focused on how the study of group dynamics could be applied to real-world social issues. Increasingly, research has applied evolutionary psychology principles to group dynamics. As humans' social environments became more complex, they acquired adaptations by way of group dynamics that enhance survival. Examples include mechanisms for dealing with status, reciprocity, identifying cheaters, ostracism, altruism, group decision-making, leadership, and intergroup relations. Key theorists Gustave Le Bon was a French social psychologist whose seminal study, The Crowd: A Study of the Popular Mind (1896), led to the development of group psychology. The British psychologist William McDougall, in his work The Group Mind (1920), researched the dynamics of groups of various sizes and degrees of organization. In Group Psychology and the Analysis of the Ego (1922), Sigmund Freud based his preliminary description of group psychology on Le Bon's work, but went on to develop his own original theory, related to what he had begun to elaborate in Totem and Taboo.
Theodor Adorno reprised Freud's essay in 1951 with his Freudian Theory and the Pattern of Fascist Propaganda, and said that "It is not an overstatement if we say that Freud, though he was hardly interested in the political phase of the problem, clearly foresaw the rise and nature of fascist mass movements in purely psychological categories." Jacob L. Moreno was a psychiatrist, dramatist, philosopher and theoretician who coined the term "group psychotherapy" in the early 1930s and was highly influential at the time. Kurt Lewin (1943, 1948, 1951) is commonly identified as the founder of the movement to study groups scientifically. He coined the term group dynamics to describe the way groups and individuals act and react to changing circumstances. William Schutz (1958, 1966) looked at interpersonal relations as stage-developmental: inclusion (am I included?), control (who is top dog here?), and affection (do I belong here?). Schutz sees groups resolving each issue in turn in order to be able to progress to the next stage. Conversely, a struggling group can devolve to an earlier stage if unable to resolve outstanding issues at its present stage. Schutz referred to these group dynamics as "the interpersonal underworld": group processes which are largely unseen and unacknowledged, as opposed to "content" issues, which are nominally the agenda of group meetings. Wilfred Bion (1961) studied group dynamics from a psychoanalytic perspective, and stated that he was much influenced by Wilfred Trotter, for whom he worked at University College Hospital London, as did another key figure in the psychoanalytic movement, Ernest Jones. Bion identified several mass group processes which involved the group as a whole adopting an orientation that, in his opinion, interfered with the ability of the group to accomplish the work it was nominally engaged in. Bion's experiences are reported in his published books, especially Experiences in Groups. The Tavistock Institute has further developed and applied the theory and practices developed by Bion. Bruce Tuckman (1965) proposed the four-stage model called Tuckman's Stages for a group. Tuckman's model states that the ideal group decision-making process should occur in four stages: forming, storming, norming, and performing. Tuckman later added a fifth stage for the dissolution of a group, called adjourning. (Adjourning may also be referred to as mourning, i.e. mourning the adjournment of the group.) This model refers to the overall pattern of the group, but of course individuals within a group work in different ways. If distrust persists, a group may never even get to the norming stage.
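As a toy illustration of the stage sequence just described (Tuckman's model is descriptive rather than algorithmic, so the following Python sketch is purely illustrative, and the stall condition reflects the distrust caveat noted above):

STAGES = ["forming", "storming", "norming", "performing", "adjourning"]

def next_stage(current, trust=True):
    # A distrustful group stalls in 'storming' and never reaches 'norming'.
    if current == "storming" and not trust:
        return current
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

stage = "forming"
for _ in range(4):
    stage = next_stage(stage)
print(stage)  # 'adjourning'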
M. Scott Peck developed stages for larger-scale groups (i.e., communities) which are similar to Tuckman's stages of group development. Peck describes the stages of a community as pseudocommunity, chaos, emptiness, and true community. Communities may be distinguished from other types of groups, in Peck's view, by the need for members to eliminate barriers to communication in order to be able to form true community. Examples of common barriers are: expectations and preconceptions; prejudices; ideology, counterproductive norms, theology and solutions; the need to heal, convert, fix or solve; and the need to control. A community is born when its members reach a stage of "emptiness" or peace. Richard Hackman developed a synthetic, research-based model for designing and managing work groups. Hackman suggested that groups are successful when they satisfy internal and external clients, develop capabilities to perform in the future, and when members find meaning and satisfaction in the group. Hackman proposed five conditions that increase the chance that groups will be successful: being a real team rather than a team in name only, a compelling direction for the group's work, an enabling structure, a supportive organizational context, and expert coaching. Intragroup dynamics Intragroup dynamics (also referred to as in-group, within-group, or commonly just 'group dynamics') are the underlying processes that give rise to a set of norms, roles, relations, and common goals that characterize a particular social group. Examples of groups include religious, political, military, and environmental groups, sports teams, work groups, and therapy groups. Amongst the members of a group, there is a state of interdependence, through which the behaviours, attitudes, opinions, and experiences of each member are collectively influenced by the other group members. In many fields of research, there is an interest in understanding how group dynamics influence individual behaviour, attitudes, and opinions. The dynamics of a particular group depend on how one defines the boundaries of the group. Often, there are distinct subgroups within a more broadly defined group. For example, one could define U.S. residents ('Americans') as a group, but could also define a more specific set of U.S. residents (for example, 'Americans in the South'). For each of these groups, there are distinct dynamics that can be discussed. Notably, on this very broad level, the study of group dynamics is similar to the study of culture. For example, there are group dynamics in the U.S. South that sustain a culture of honor, which is associated with norms of toughness, honour-related violence, and self-defence. Group formation starts with a psychological bond between individuals. The social cohesion approach suggests that group formation comes out of bonds of interpersonal attraction. In contrast, the social identity approach suggests that a group starts when a collection of individuals perceive that they share some social category (smokers, nurses, students, hockey players), and that interpersonal attraction only secondarily enhances the connection between individuals. Additionally, from the social identity approach, group formation involves both identifying with some individuals and explicitly not identifying with others. That is to say, a level of psychological distinctiveness is necessary for group formation. Through interaction, individuals begin to develop group norms, roles, and attitudes which define the group, and which are internalized to influence behaviour. Emergent groups arise from a relatively spontaneous process of group formation. For example, in response to a natural disaster, an emergent response group may form. These groups are characterized as having no preexisting structure (e.g. group membership, allocated roles) or prior experience working together. Yet, these groups still express high levels of interdependence and coordinate knowledge, resources, and tasks. Joining a group is determined by a number of different factors, including an individual's personal traits; gender; social motives such as the need for affiliation, need for power, and need for intimacy; attachment style; and prior group experiences. Groups can offer some advantages to their members that would not be possible if an individual decided to remain alone, including gaining social support in the forms of emotional support, instrumental support, and informational support. Group membership also offers friendship, potential new interests, the learning of new skills, and enhanced self-esteem.
However, joining a group may also cost an individual time, effort, and personal resources, as they may conform to social pressures and strive to reap the benefits that may be offered by the group. The Minimax Principle is a part of social exchange theory that states that people will join and remain in a group that can provide them with the maximum amount of valuable rewards while at the same time ensuring the minimum amount of costs to themselves. However, this does not necessarily mean that a person will join a group simply because the reward/cost ratio seems attractive. According to Harold Kelley and John Thibaut, a group may be attractive to us in terms of costs and benefits, but that attractiveness alone does not determine whether or not we will join the group. Instead, our decision is based on two factors: our comparison level, and our comparison level for alternatives. In John Thibaut and Harold Kelley's social exchange theory, the comparison level is the standard by which an individual will evaluate the desirability of becoming a member of the group and forming new social relationships within the group. This comparison level is influenced by previous relationships and memberships in different groups. Those individuals who have experienced positive rewards with few costs in previous relationships and groups will have a higher comparison level than a person who experienced more negative costs and fewer rewards in previous relationships and group memberships. According to social exchange theory, group membership will be more satisfying to a prospective new member if the group's outcomes, in terms of costs and rewards, are above the individual's comparison level, and unsatisfying if the outcomes are below the individual's comparison level. The comparison level, however, only predicts how satisfied a new member will be with the social relationships within the group. To determine whether people will actually join or leave a group, the value of other, alternative groups needs to be taken into account. This is called the comparison level for alternatives: the standard by which an individual will evaluate the quality of the group in comparison to other groups the individual has the opportunity to join. Thibaut and Kelley stated that the "comparison level for alternatives can be defined informally as the lowest level of outcomes a member will accept in the light of available alternative opportunities." Joining and leaving groups is ultimately dependent on the comparison level for alternatives, whereas member satisfaction within a group depends on the comparison level. To summarize: if membership in the group is above both the comparison level for alternatives and the comparison level, the membership will be satisfying and an individual will be more likely to join. If membership in the group is above the comparison level for alternatives but below the comparison level, membership will not be satisfying; however, the individual will likely join the group, since no other desirable options are available. If group membership is below the comparison level for alternatives but above the comparison level, membership is satisfying, but an individual will be unlikely to join. If group membership is below both the comparison level and the comparison level for alternatives, membership will be dissatisfying and the individual will be less likely to join the group.
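The two-factor prediction just summarized can be sketched in a few lines of Python; the numeric scores and the function name are illustrative assumptions, not part of Thibaut and Kelley's formulation.

def exchange_prediction(outcomes, comparison_level, comparison_level_alt):
    # Satisfaction depends on the comparison level (CL);
    # joining/leaving depends on the comparison level for alternatives (CLalt).
    satisfied = outcomes > comparison_level
    likely_to_join = outcomes > comparison_level_alt
    return satisfied, likely_to_join

print(exchange_prediction(8, 5, 4))  # (True, True): above both standards
print(exchange_prediction(5, 7, 3))  # (False, True): unsatisfying, but no better option
print(exchange_prediction(6, 4, 9))  # (True, False): satisfying, yet unlikely to join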
Groups can vary drastically from one another. For example, three best friends who interact every day and a collection of people watching a movie in a theater both constitute groups. Past research has identified four basic types of groups, which include, but are not limited to: primary groups, social groups, collective groups, and categories. It is useful to define these four types of groups because they correspond to the intuitions of most lay people. For example, in one experiment, participants were asked to sort a number of groups into categories based on their own criteria. Examples of groups to be sorted were a sports team, a family, people at a bus stop, and women. It was found that participants consistently sorted groups into four categories: intimacy groups, task groups, loose associations, and social categories. These categories are conceptually similar to the four basic types discussed here. It therefore seems that individuals intuitively define aggregations of individuals in this way. Primary groups are characterized by relatively small, long-lasting groups of individuals who share personally meaningful relationships. Since the members of these groups often interact face-to-face, they know each other very well and are unified. Individuals who are part of primary groups consider the group to be an important part of their lives. Consequently, members strongly identify with their group, even without regular meetings. Charles Cooley believed that primary groups were essential for integrating individuals into their society, since this is often their first experience with a group. For example, individuals are born into a primary group, their family, which creates a foundation on which to base their future relationships. Individuals can be born into a primary group; however, primary groups can also form when individuals interact for extended periods of time in meaningful ways. Examples of primary groups include family, close friends, and gangs. A social group is characterized by a formally organized group of individuals who are not as emotionally involved with each other as those in a primary group. These groups tend to be larger, with shorter memberships compared to primary groups. Further, social groups do not have memberships as stable as those of primary groups, since members are able to leave their social group and join new groups. The goals of social groups are often task-oriented as opposed to relationship-oriented. Examples of social groups include coworkers, clubs, and sports teams. Collectives are characterized by large groups of individuals who display similar actions or outlooks. They are loosely formed, spontaneous, and brief. Examples of collectives include a flash mob, an audience at a movie, and a crowd watching a building burn. Categories are characterized by a collection of individuals who are similar in some way. Categories become groups when their similarities have social implications. For example, when people treat others differently because of certain aspects of their appearance or heritage, this creates groups of different races. For this reason, categories can appear to be higher in entitativity and essentialism than primary, social, and collective groups. Entitativity is defined by Campbell as the extent to which collections of individuals are perceived to be a group. The degree of entitativity that a group has is influenced by whether a collection of individuals experience the same fate, display similarities, and are close in proximity.
If individuals believe that a group is high in entitativity, they are likely to believe that the group has unchanging characteristics that are essential to it, a belief known as essentialism. Examples of categories are New Yorkers, gamblers, and women. The social group is a critical source of information about individual identity. We naturally make comparisons between our own group and other groups, but we do not necessarily make objective comparisons. Instead, we make evaluations that are self-enhancing, emphasizing the positive qualities of our own group (in-group bias). In this way, these comparisons give us a distinct and valued social identity that benefits our self-esteem. Our social identity and group memberships also satisfy a need to belong. Of course, individuals belong to multiple groups; therefore, one's social identity can have several, qualitatively distinct parts (for example, one's ethnic identity, religious identity, and political identity). Optimal distinctiveness theory suggests that individuals have a desire to be similar to others, but also a desire to differentiate themselves, ultimately seeking some balance of these two desires (to obtain optimal distinctiveness). For example, one might imagine a young teenager in the United States who tries to balance these desires, not wanting to be 'just like everyone else,' but also wanting to 'fit in' and be similar to others. One's collective self may offer a balance between these two desires: to be similar to others (those with whom you share group membership), but also to be different from others (those outside of your group). In the social sciences, group cohesion refers to the processes that keep members of a social group connected. Terms such as attraction, solidarity, and morale are often used to describe group cohesion. It is thought to be one of the most important characteristics of a group, and has been linked to group performance, intergroup conflict, and therapeutic change. Group cohesion, as a scientifically studied property of groups, is commonly associated with Kurt Lewin and his student, Leon Festinger. Lewin defined group cohesion as the willingness of individuals to stick together, and believed that without cohesiveness a group could not exist. As an extension of Lewin's work, Festinger (along with Stanley Schachter and Kurt Back) described cohesion as "the total field of forces which act on members to remain in the group" (Festinger, Schachter, & Back, 1950, p. 37). Later, this definition was modified to describe the forces acting on individual members to remain in the group, termed attraction to the group. Since then, several models for understanding the concept of group cohesion have been developed, including Albert Carron's hierarchical model and several bi-dimensional models (vertical v. horizontal cohesion, task v. social cohesion, belongingness and morale, and personal v. social attraction). Before Lewin and Festinger, there were, of course, descriptions of a very similar group property. For example, Emile Durkheim described two forms of solidarity (mechanical and organic), which created a sense of collective conscience and an emotion-based sense of community. Beliefs within the in-group are based on how individuals in the group see their other members. Individuals tend to upgrade likeable in-group members and to distance themselves from unlikeable ones, making the latter a separate out-group. This is called the black sheep effect.
The way a person judges socially desirable and socially undesirable individuals depends upon whether those individuals are part of the in-group or the out-group. This phenomenon was later accounted for by subjective group dynamics theory: according to this theory, people derogate socially undesirable (deviant) in-group members relative to out-group members, because deviants give the in-group a bad image and jeopardize people's social identity. In more recent studies, Marques and colleagues have shown that this occurs more strongly with regard to full members of the in-group than other members. Whereas new members of a group must prove themselves to the full members to become accepted, full members have undergone socialization and are already accepted within the group. They have more privilege than newcomers but also more responsibility to help the group achieve its goals. Marginal members were once full members but lost membership because they failed to live up to the group's expectations; they can rejoin the group if they go through re-socialization. Full members' behavior is therefore paramount in defining the in-group's image. Bogart and Ryan surveyed the development of new members' stereotypes about in-groups and out-groups during socialization. Results showed that new members judged themselves as consistent with the stereotypes of their in-groups, even when they had only recently committed to joining those groups or were still marginal members. They also tended to judge the group as a whole in an increasingly less positive manner after they became full members. However, there is no evidence that this affects the way they are judged by other members. Nevertheless, depending on their self-esteem, members of the in-group may hold private beliefs about the group's activities that differ from what they publicly express, claiming to share beliefs they do not actually hold. One member may not personally agree with something the group does, but to avoid the black sheep effect, they will publicly agree with the group and keep their private beliefs to themselves. If the person is privately self-aware, he or she is more likely to comply with the group even while holding different personal beliefs about the situation. In situations of hazing within fraternities and sororities on college campuses, pledges may encounter this type of situation and may outwardly comply with the tasks they are forced to do regardless of their personal feelings about the Greek institution they are joining; this is done in an effort to avoid becoming an outcast of the group. Outcasts who behave in a way that might jeopardize the group tend to be treated more harshly than the likeable members of a group, creating a black sheep effect. Full members of a fraternity might treat incoming new members harshly, forcing the pledges to decide whether they approve of the situation and whether they will voice their disagreement. Individual behaviour is influenced by the presence of others. For example, studies have found that individuals work harder and faster when others are present (see social facilitation), and that an individual's performance is reduced when others in the situation create distraction or conflict. Groups also influence individuals' decision-making processes. These include decisions related to in-group bias, conformity (see the Asch conformity experiments), obedience (see the Milgram experiment), and groupthink. There are both positive and negative implications of group influence on individual behaviour.
This type of influence is often useful in the context of work settings, team sports, and political activism. However, the influence of groups on the individual can also generate extremely negative behaviours, as was evident in Nazi Germany, the My Lai massacre, and the Abu Ghraib prison (see also Abu Ghraib torture and prisoner abuse). A group's structure is the internal framework that defines members' relations to one another over time. Frequently studied elements of group structure include roles, norms, values, communication patterns, and status differentials. Group structure has also been defined as the underlying pattern of roles, norms, and networks of relations among members that define and organize the group. Roles can be defined as a tendency to behave, contribute, and interrelate with others in a particular way. Roles may be assigned formally, but more often are defined through the process of role differentiation. Role differentiation is the degree to which different group members have specialized functions; a group with a high level of role differentiation has many different roles that are specialized and narrowly defined. A key role in a group is the leader, but there are other important roles as well, including task roles, relationship roles, and individual roles. Functional (task) roles are generally defined in relation to the tasks the team is expected to perform. Individuals engaged in task roles focus on the goals of the group and on enabling the work that members do; examples of task roles include coordinator, recorder, critic, and technician. A group member engaged in a relationship role (or socioemotional role) is focused on maintaining the interpersonal and emotional needs of the group's members; examples of relationship roles include encourager, harmonizer, and compromiser. Norms are the informal rules that groups adopt to regulate members' behaviour. Norms refer to what should be done and represent value judgments about appropriate behaviour in social situations. Although they are infrequently written down or even discussed, norms have a powerful influence on group behaviour. They are a fundamental aspect of group structure, as they provide direction and motivation and organize the social interactions of members. Norms are said to be emergent, as they develop gradually through interactions between group members. While many norms are widespread throughout society, groups may develop their own norms that members must learn when they join. There are various types of norms, including prescriptive, proscriptive, descriptive, and injunctive. Intermember relations are the connections among the members of a group, or the social network within a group. Group members are linked to one another at varying levels. Examining the intermember relations of a group can highlight the group's density (how many of the possible links among members actually exist) and the degree centrality of individual members (the number of ties a given member has). Analysing intermember relations can lead to a better understanding of the roles of particular group members (e.g. an individual who is a "go-between" in a group will have closer ties to numerous group members, which can aid in communication, etc.).
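As an illustration of these two measures, the short Python sketch below computes density and degree centrality for a hypothetical five-member group. The names and ties are invented for the example; the formulas are the standard ones for an undirected network:

members = ["Ana", "Ben", "Caro", "Dev", "Eli"]   # hypothetical group
ties = {("Ana", "Ben"), ("Ana", "Caro"), ("Ben", "Caro"),
        ("Caro", "Dev"), ("Caro", "Eli")}        # mutual (undirected) ties

n = len(members)
possible_ties = n * (n - 1) // 2                 # every pair that could be linked
density = len(ties) / possible_ties              # share of possible ties present

# Degree centrality: the number of ties each member has.
degree = {m: sum(m in pair for pair in ties) for m in members}

print(f"density = {density:.2f}")   # 5 of 10 possible ties -> 0.50
print(degree)                       # Caro, with 4 ties, is the "go-between"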
Values are goals or ideas that serve as guiding principles for the group. Like norms, values may be communicated either explicitly or on an ad hoc basis. Values can serve as a rallying point for the team. However, some values (such as conformity) can also be dysfunctional and lead to poor decisions by the team. Communication patterns describe the flow of information within the group, and they are typically described as either centralized or decentralized. With a centralized pattern, communications tend to flow from one source to all group members. Centralized communications allow standardization of information, but may restrict the free flow of information. Decentralized communications make it easy to share information directly between group members; when decentralized, communications tend to flow more freely, but the delivery of information may not be as fast or accurate as with centralized communications. Another potential downside of decentralized communications is the sheer volume of information that can be generated, particularly with electronic media. Status differentials are the relative differences in status among group members. When a group is first formed, the members may all be on an equal level, but over time certain members may acquire status and authority within the group; this can create what is known as a pecking order. Status can be determined by a variety of factors and characteristics, including specific status characteristics (e.g. task-specific behavioural and personal characteristics, such as experience) or diffuse status characteristics (e.g. age, race, ethnicity). It is important that other group members perceive an individual's status to be warranted and deserved; otherwise that individual may not have authority within the group. Status differentials may affect the relative amount of pay among group members, and they may also affect the group's tolerance of violations of group norms (e.g. people with higher status may be given more freedom to violate them). Forsyth suggests that while many daily tasks undertaken by individuals could be performed in isolation, the preference is to perform with other people. In an 1898 study of dynamogenic stimulation, undertaken to explain pacemaking and competition, Norman Triplett theorized that "the bodily presence of another rider is a stimulus to the racer in arousing the competitive instinct...". This dynamogenic factor is believed to have laid the groundwork for what is now known as social facilitation, an "improvement in task performance that occurs when people work in the presence of other people". Building on Triplett's observation, in 1920 Floyd Allport found that although people in groups were more productive than individuals, the quality of their output was inferior. In 1965, Robert Zajonc expanded the study of arousal response (originated by Triplett) with further research on social facilitation. Zajonc considered two experimental paradigms: in the first, audience effects, he observed behaviour in the presence of passive spectators; in the second, co-action effects, he examined behaviour in the presence of another individual engaged in the same activity. Zajonc observed two categories of behaviour: dominant responses, to tasks that are easier to learn and which dominate other potential responses, and nondominant responses, to tasks that are less likely to be performed. In his theory of social facilitation, Zajonc concluded that when action is required in the presence of others, the requirements of the task determine whether social facilitation or social interference will affect the outcome.
If social facilitation occurs, the task will have required a dominant response from the individual, resulting in better performance in the presence of others; if social interference occurs, the task will have elicited a nondominant response, resulting in subpar performance. Several theories analysing performance gains in groups via drive, motivational, cognitive, and personality processes explain why social facilitation occurs. Zajonc hypothesized that compresence (the state of responding in the presence of others) elevates an individual's drive level, which in turn triggers social facilitation when tasks are simple and easy to execute, but impedes performance when tasks are challenging. Nickolas Cottrell (1972) proposed the evaluation apprehension model, in which he suggested that people associate social situations with an evaluative process. Cottrell argued that this situation is met with apprehension, and that it is this motivational response, not arousal or elevated drive, that is responsible for increased productivity on simple tasks and decreased productivity on complex tasks in the presence of others. In The Presentation of Self in Everyday Life (1959), Erving Goffman assumes that individuals can control how they are perceived by others. He suggests that people fear being perceived by others as having negative, undesirable qualities and characteristics, and that it is this fear that compels individuals to portray a positive self-presentation or social image of themselves. In relation to performance gains, Goffman's self-presentation theory predicts that, in situations where they may be evaluated, individuals will increase their efforts in order to project and preserve a positive image. Distraction-conflict theory contends that when a person is working in the presence of other people, an interference effect occurs, splitting the individual's attention between the task and the other person. On simple tasks, where the individual is not challenged by the task, the interference effect is negligible and performance is therefore facilitated. On more complex tasks, where drive is not strong enough to compete effectively against the effects of distraction, there is no performance gain. The Stroop task (Stroop effect) demonstrated that distractions can improve performance on certain tasks by narrowing a person's focus of attention. Social orientation theory considers the way a person approaches social situations. It predicts that self-confident individuals with a positive outlook will show performance gains through social facilitation, whereas a self-conscious individual who approaches social situations with apprehension is less likely to perform well, due to social interference effects. Intergroup dynamics Intergroup dynamics (or intergroup relations) refers to the behavioural and psychological relationship between two or more groups. This includes perceptions, attitudes, opinions, and behaviours towards one's own group, as well as those towards another group. In some cases, intergroup dynamics are prosocial, positive, and beneficial (for example, when multiple research teams work together to accomplish a task or goal). In other cases, intergroup dynamics can create conflict. For example, Fischer & Ferlie found that initially positive dynamics between a clinical institution and its external authorities changed dramatically into a 'hot' and intractable conflict when the authorities interfered with its embedded clinical model.
Similarly, intergroup dynamics played a significant role in the 1999 Columbine High School shooting in Littleton, Colorado, United States, underlying Eric Harris's and Dylan Klebold's decision to kill a teacher and 14 students (including themselves). According to social identity theory, intergroup conflict starts with a process of comparison between individuals in one group (the in-group) and those of another group (the out-group). This comparison process is not unbiased and objective; instead, it is a mechanism for enhancing one's self-esteem. In the process of such comparisons, an individual tends to favour their own group: even without any intergroup interaction (as in the minimal group paradigm), individuals begin to show favouritism towards their own group and negative reactions towards the out-group. This conflict can result in prejudice, stereotypes, and discrimination. Intergroup conflict can be highly competitive, especially for social groups with a long history of conflict (for example, the 1994 Rwandan genocide, rooted in group conflict between the ethnic Hutu and Tutsi). In contrast, intergroup competition can sometimes be relatively harmless, particularly in situations where there is little history of conflict (for example, between students of different universities), leading to relatively harmless generalizations and mild competitive behaviours. Intergroup conflict is commonly recognized amidst racial, ethnic, religious, and political groups. The formation of intergroup conflict was investigated in a popular series of studies by Muzafer Sherif and colleagues in 1961, called the Robbers Cave Experiment. The Robbers Cave Experiment was later used to support realistic conflict theory. Other prominent theories relating to intergroup conflict include social dominance theory and social-/self-categorization theory. There have been several strategies developed for reducing the tension, bias, prejudice, and conflict between social groups. These include the contact hypothesis, the jigsaw classroom, and several categorization-based strategies. In 1954, Gordon Allport suggested that by promoting contact between groups, prejudice can be reduced. Further, he suggested four optimal conditions for contact: equal status between the groups in the situation; common goals; intergroup cooperation; and the support of authorities, law, or customs. Since then, over 500 studies have been done on prejudice reduction under variations of the contact hypothesis, and a meta-analytic review suggests overall support for its efficacy; in some cases, prejudice between groups can be reduced even without the four optimal conditions outlined by Allport. Under the contact hypothesis, several models have been developed. A number of these models utilize a superordinate identity to reduce prejudice: that is, a more broadly defined, umbrella group or identity that includes the groups that are in conflict. By emphasizing this superordinate identity, individuals in both subgroups can share a common social identity. For example, if there is conflict between White, Black, and Latino students in a high school, one might try to emphasize the high school group and the identity that students share, in order to reduce conflict between the groups. Models utilizing superordinate identities include the common in-group identity model, the in-group projection model, the mutual intergroup differentiation model, and the in-group identity model. Similarly, "recategorization" is a broader term used by Gaertner et al. to describe these strategies.
There are also techniques that utilize interdependence between two or more groups with the aim of reducing prejudice: members of different groups must rely on one another to accomplish some goal or task. In the Robbers Cave Experiment, Sherif used this strategy to reduce conflict between groups. Elliot Aronson's jigsaw classroom also uses this strategy of interdependence. In 1971, racial tensions were running high in Austin, Texas. Aronson was brought in to examine the nature of this tension within schools and to devise a strategy for reducing it (so as to improve the process of school integration, mandated under Brown v. Board of Education in 1954). Despite strong evidence for the effectiveness of the jigsaw classroom, the strategy was not widely used, arguably because of strong attitudes existing outside the schools, which still resisted the notion that racial and ethnic minority groups are equal to Whites and, similarly, should be integrated into schools.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jewish_culture] | [TOKENS: 15730] |
Jewish culture is the culture of the Jewish people, from its formation in ancient times until the current age. Judaism itself is not simply a faith-based religion, but an orthopraxy and ethnoreligion, pertaining to deed, practice, and identity. Jewish culture covers many aspects, including religion and worldviews; literature, media, and cinema; art and architecture; cuisine and traditional dress; attitudes to gender, marriage, and family; social customs and lifestyles; and music and dance. Some elements of Jewish culture come from within Judaism, others from the interaction of Jews with host populations, and others still from the inner social and cultural dynamics of the community. Before the 18th century, religion dominated virtually all aspects of Jewish life and infused its culture. Since the advent of secularization, a wholly secular Jewish culture has emerged as well. History Jewish society has not been politically unified since the united monarchy. Since then, Israelite populations have always been geographically dispersed (see Jewish diaspora), so that by the 19th century the Ashkenazi Jews were mainly located in Eastern and Central Europe; the Sephardi Jews were largely spread among various communities in the Mediterranean region; Mizrahi Jews were primarily spread throughout Western Asia; and other populations of Jews lived in the Caucasus, Crimea, Central Asia, Ethiopia, and India (see Jewish ethnic divisions). There has been communication and traffic between these Jewish communities: many Sephardic exiles blended into the Ashkenazi communities of Central Europe following the Spanish Inquisition; many Ashkenazim migrated to the Ottoman Empire, giving rise to the characteristic Syrian-Jewish family name "Ashkenazi"; and Iraqi-Jewish traders formed a distinct Jewish community in India. At the same time, many of these Jewish populations were to some degree cut off from the cultures that surrounded them by ghettoization, the Muslim laws of dhimma, and their religious leaders' traditional discouragement of contact between Jews and members of polytheistic populations. Medieval Jewish communities in Eastern Europe continued to display distinct cultural traits over the centuries. Despite the universalist leanings of the Enlightenment (and its echo within Judaism in the Haskalah movement), many Yiddish-speaking Jews in Eastern Europe continued to see themselves as forming a distinct national group, an 'am yehudi, from the Biblical Hebrew; but, adapting this idea to Enlightenment values, they reconceived it as an ethnic group whose identity did not depend on religion, which under Enlightenment thinking fell into a separate category. Constantin Măciucă writes of the existence of "a differentiated but not isolated Jewish spirit" permeating the culture of Yiddish-speaking Jews. This was only intensified as the rise of Romanticism amplified the sense of national identity across Europe generally. Thus, for example, members of the General Jewish Labour Bund in the late 19th and early 20th centuries were generally non-religious, and one of the historical leaders of the Bund was the child of converts to Christianity, though not a practicing or believing Christian himself. The Haskalah combined with the Jewish Emancipation movement under way in Central and Western Europe to create an opportunity for Jews to enter secular society.
At the same time, pogroms in Eastern Europe provoked a surge of migration, in large part to the United States, where some 2 million Jewish immigrants resettled between 1880 and 1920. By 1931, shortly before the Holocaust, 92% of the world's Jewish population was Ashkenazi in origin. Secularism originated in Europe as a series of movements that militated for a new, heretofore unheard-of concept called "secular Judaism". For these reasons, much of what English-speakers and, to a lesser extent, non-English-speaking Europeans think of as "secular Jewish culture" is, in essence, the Jewish cultural movement that evolved in Central and Eastern Europe and was subsequently brought to North America by immigrants. During the 1940s, the Holocaust uprooted and destroyed most of the Jewish communities of Europe. This, in combination with the creation of the State of Israel and the consequent Jewish exodus from Arab lands, resulted in a further geographic shift. Defining secular culture among those who practice traditional Judaism is difficult, because the entire culture is, by definition, entwined with religious traditions: the idea of separate ethnic and religious identity is foreign to the Hebrew tradition of an 'am yisrael (this is particularly true for Orthodox Judaism). Gary Tobin, head of the Institute for Jewish and Community Research, said of traditional Jewish culture: The dichotomy between religion and culture doesn't really exist. Every religious attribute is filled with culture; every cultural act filled with religiosity. Synagogues themselves are great centers of Jewish culture. After all, what is life really about? Food, relationships, enrichment ... So is Jewish life. So many of our traditions inherently contain aspects of culture. Look at the Passover Seder — it's essentially great theater. Jewish education and religiosity bereft of culture is not as interesting. Yaakov Malkin, Professor of Aesthetics and Rhetoric at Tel Aviv University and the founder and academic director of Meitar College for Judaism as Culture in Jerusalem, writes: Today very many secular Jews take part in Jewish cultural activities, such as celebrating Jewish holidays as historical and nature festivals, imbued with new content and form, or marking life-cycle events such as birth, bar/bat mitzvah, marriage, and mourning in a secular fashion. They come together to study topics pertaining to Jewish culture and its relation to other cultures, in havurot, cultural associations, and secular synagogues, and they participate in public and political action coordinated by secular Jewish movements, such as the former movement to free Soviet Jews, and movements to combat pogroms, discrimination, and religious coercion. Jewish secular humanistic education inculcates universal moral values through classic Jewish and world literature and through organizations for social change that aspire to ideals of justice and charity. In North America, the secular and cultural Jewish movements are divided into three umbrella organizations: the Society for Humanistic Judaism (SHJ), the Congress of Secular Jewish Organizations (CSJO), and The Workers Circle. Philosophy and religion Jewish philosophy includes all philosophy carried out by Jews, or in relation to the religion of Judaism. Jewish philosophy extends over several main eras of Jewish history, including the ancient and biblical era, the medieval era, and the modern era (see Haskalah). Ancient Jewish philosophy is expressed in the Bible. According to Prof.
Israel Efros, the principles of Jewish philosophy begin in the Bible, where the foundations of Jewish monotheistic belief can be found, such as the belief in one God, the separation of God from the world and from nature (as opposed to pantheism), and the creation of the world. Other biblical writings associated with philosophy are Psalms, which contains invitations to admire the wisdom of God through his works (from this, some scholars suggest, Judaism harbors a philosophical undercurrent), and Ecclesiastes, which is often considered the only genuine philosophical work in the Hebrew Bible; its author seeks to understand the place of human beings in the world and the meaning of life. Other writings related to philosophy can be found in the Deuterocanonical books, such as Sirach and the Book of Wisdom. During the Hellenistic era, Hellenistic Judaism aspired to combine Jewish religious tradition with elements of Greek culture and philosophy. The philosopher Philo used philosophical allegory in an attempt to fuse and harmonize Greek philosophy with Jewish philosophy; his work attempts to combine Plato and Moses into one philosophical system. He developed an allegorical approach to interpreting holy scripture, in contrast to literal approaches of interpretation. His allegorical exegesis was important for several Christian Church Fathers, and some scholars hold that his concept of the Logos as God's creative principle influenced early Christology. Other scholars, however, deny direct influence and say that both Philo and early Christianity borrowed from a common source. Between the ancient era and the Middle Ages, most Jewish philosophy was concentrated in the rabbinic literature expressed in the Talmud and Midrash. In the 9th century, Saadia Gaon wrote the text Emunoth ve-Deoth, the first systematic presentation and philosophic foundation of the dogmas of Judaism. The Golden Age of Jewish culture in Spain included many influential Jewish philosophers, such as Moses ibn Ezra, Abraham ibn Ezra, Solomon ibn Gabirol, Yehuda Halevi, Isaac Abravanel, Nahmanides, Joseph Albo, Abraham ibn Daud, Nissim of Gerona, Bahya ibn Paquda, Abraham bar Hiyya, Joseph ibn Tzaddik, Hasdai Crescas, and Isaac ben Moses Arama. The most notable is Maimonides, who is considered a prominent philosopher and polymath in the Jewish, Islamic, and Western worlds. Outside Spain, other philosophers include Natan'el al-Fayyumi, Elia del Medigo, Jedaiah ben Abraham Bedersi, and Gersonides. Jewish philosophers of the modern era, mainly in Europe, include Baruch Spinoza, founder of Spinozism, whose work included modern rationalism and biblical criticism and laid the groundwork for the 18th-century Enlightenment; his work has earned him recognition as one of Western philosophy's most important thinkers. Others are Isaac Orobio de Castro, Tzvi Ashkenazi, David Nieto, Isaac Cardoso, Jacob Abendana, Uriel da Costa, Francisco Sanches, and Moses Almosnino. A new era began in the 18th century with the thought of Moses Mendelssohn. Mendelssohn has been described as the "'third Moses', with whom begins a new era in Judaism", just as new eras began with Moses the prophet and with Moses Maimonides. Mendelssohn was a German Jewish philosopher to whose ideas the renaissance of European Jews, the Haskalah (Jewish Enlightenment), is indebted. He has been referred to as the father of Reform Judaism, though Reform spokesmen have been "resistant to claim him as their spiritual father".
Mendelssohn came to be regarded as a leading cultural figure of his time by both Germans and Jews. Jewish Enlightenment philosophy included Menachem Mendel Lefin, Salomon Maimon, and Isaac Satanow. The 19th century comprised both secular and religious philosophy and included philosophers such as Elijah Benamozegh, Hermann Cohen, Moses Hess, Samson Raphael Hirsch, Samuel Hirsch, Nachman Krochmal, Samuel David Luzzatto, and Nachman of Breslov, founder of the Breslov Hasidic movement. The 20th century included the notable philosophers Jacques Derrida, Karl Popper, Emmanuel Levinas, Claude Lévi-Strauss, Hilary Putnam, Alfred Tarski, Ludwig Wittgenstein, A. J. Ayer, Walter Benjamin, Raymond Aron, Theodor W. Adorno, Isaiah Berlin, and Henri Bergson. Education and politics A range of moral and political views is evident early in the history of Judaism, which partially explains the diversity apparent among secular Jews, who are often influenced by moral beliefs found in Jewish scripture and traditions. In recent centuries, secular Jews in Europe and the Americas have tended towards the political left and played key roles in the birth of the 19th century's labor movement and socialism. The biographies of women like Emma Goldman and Hannah Arendt embody complicated relationships between politics, Judaism, and feminism. While Diaspora Jews have also been represented on the conservative side of the political spectrum, even politically conservative Jews have tended to support pluralism more consistently than many other elements of the political right. Some scholars attribute this to the fact that Jews are not expected to proselytize, a norm derived from Halakha. This lack of a universalizing religion is combined with the fact that most Jews live as minorities in diaspora countries and that no central Jewish religious authority has existed since 363 CE. The value of education is strongly embedded in Jewish culture. Economic activity In the Middle Ages, European laws prevented Jews from owning land and gave them strong incentives to go into professions that non-Jewish Europeans were unwilling to undertake. During the medieval period, there was a very strong social stigma against lending money and charging interest among the Christian majority. In most of Europe until the late 18th century, and in some places to an even later date, Jews were prohibited by Roman Catholic governments (and others) from owning land. Meanwhile, the Church, because of a number of Bible verses forbidding usury (e.g., Leviticus 25:36), declared that charging any interest was against divine law, which prevented any mercantile use of capital by pious Christians. As canon law did not apply to Jews, they were not liable to the ecclesiastical punishments placed upon usurers by the popes. Christian rulers gradually saw the advantage of having a class of men, like the Jews, who could supply capital for their use without being liable to excommunication, and so the money trade of western Europe fell by this means into the hands of the Jews. However, in almost every instance where Jews acquired large amounts through banking transactions, the property fell, either during their lives or upon their deaths, into the hands of the king. This happened to Aaron of Lincoln in England, Ezmel de Ablitas in Navarre, Heliot de Vesoul in Provence, Benveniste de Porta in Aragon, and others.
It was often for this reason that kings supported the Jews, and even objected to their becoming Christians (because in that case fortunes earned by usury could not be seized by the crown after their deaths). Thus, in both England and France, the kings demanded to be compensated by the church for every Jew converted. This type of royal trickery was one factor in creating the stereotypical Jewish role of banker and/or merchant. As a modern system of capital began to develop, loans became necessary for commerce and industry. Jews were able to gain a foothold in the new field of finance by providing these services: as non-Catholics, they were not bound by the ecclesiastical prohibition against "usury"; and in terms of Judaism itself, Hillel had long ago re-interpreted the Torah's ban on charging interest, allowing interest when it is needed to make a living. Science and technology The strong Jewish tradition of religious scholarship often left Jews well prepared for secular scholarship. In some times and places, this was countered by banning Jews from studying at universities, or admitting them only in limited numbers (see Jewish quota). Over the centuries, Jews have been poorly represented among land-holding classes, but far better represented in academia, the professions, finance, commerce, and many scientific fields. The strong representation of Jews in science and academia is evidenced by the fact that 193 persons known to be Jews or of Jewish ancestry have been awarded the Nobel Prize, accounting for 22% of all individual recipients worldwide between 1901 and 2014; of these, 26% were in physics, 22% in chemistry, and 27% in physiology or medicine. In the fields of mathematics and computer science, 31% of Turing Award recipients and 27% of Fields Medal recipients were or are Jewish. Early Jewish activity in science can be found in the Hebrew Bible, where some of the books contain descriptions of the physical world. Biblical cosmology provides sporadic glimpses that may be stitched together to form a Biblical impression of the physical universe. There have been comparisons between the Bible, with passages such as the Genesis creation narrative, and the astronomy of classical antiquity more generally. The Bible also contains various cleansing rituals. One suggested ritual, for example, deals with the proper procedure for cleansing a leper (Leviticus 14:1–32). It is a fairly elaborate process, to be performed after a leper had already been healed of leprosy (Leviticus 14:3), involving extensive cleansing and personal hygiene, but also including the sacrifice of a bird and lambs, with their blood used to symbolize that the afflicted has been cleansed. The Torah proscribes intercropping (Lev. 19:19, Deut. 22:9), a practice often associated with sustainable agriculture and organic farming in modern agricultural science. The Mosaic code has provisions concerning the conservation of natural resources, such as trees (Deuteronomy 20:19–20) and birds (Deuteronomy 22:6–7). During the medieval era, astronomy was a primary field among Jewish scholars and was widely studied and practiced. Prominent astronomers included Abraham Zacuto, who in 1478 published his Hebrew book Ha-hibbur ha-gadol, in which he wrote about the Solar System, charting the positions of the Sun, the Moon, and five planets. His work served Portugal's exploration voyages and was used by Vasco da Gama and also by Christopher Columbus; the lunar crater Zagut is named after him. The mathematician and astronomer Abraham bar Hiyya Ha-Nasi authored the first European book to include the full solution of the quadratic equation x² − ax + b = 0, and influenced the work of Leonardo Fibonacci. Bar Hiyya also proved, by the method of indivisibles, that for any circle S = L·R/2, where S is the area, L is the circumference, and R is the radius.
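Both of bar Hiyya's results agree with the modern formulas; the short derivation below is a verification added here for clarity and is not part of the historical record. The first line is the standard quadratic formula specialized to his equation, and the second uses the circumference relation L = 2πR:

\[ x^2 - ax + b = 0 \quad\Longrightarrow\quad x = \frac{a \pm \sqrt{a^2 - 4b}}{2} \]

\[ S = \frac{LR}{2} = \frac{(2\pi R)\,R}{2} = \pi R^2 \]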
Garcia de Orta, a Portuguese Renaissance Jewish physician, was a pioneer of tropical medicine. He published his work Colóquios dos simples e drogas da India in 1563, which deals with a series of substances, many of them unknown or the subject of confusion and misinformation in Europe at that period. He was the first European to describe Asiatic tropical diseases, notably cholera; he performed an autopsy on a cholera victim, the first recorded autopsy in India. Bonet de Lattes is known chiefly as the inventor of an astronomical ring-dial by means of which solar and stellar altitudes can be measured and the time determined with great precision, by night as well as by day. Other related figures are Abraham ibn Ezra, after whom the lunar crater Abenezra is named; David Gans; Judah ibn Verga; and the astronomer Mashallah ibn Athari, after whom the lunar crater Messala is named. Albert Einstein was a German-born theoretical physicist and is considered one of the most prominent scientists in history, often regarded as the "father of modern physics". His revolutionary work on the theory of relativity transformed theoretical physics and astronomy during the 20th century. When first published, relativity superseded a 200-year-old theory of mechanics created primarily by Isaac Newton. In the field of physics, relativity improved the science of elementary particles and their fundamental interactions, along with ushering in the nuclear age. With relativity, cosmology and astrophysics predicted extraordinary astronomical phenomena such as neutron stars, black holes, and gravitational waves. Einstein formulated the well-known mass–energy equivalence, E = mc², and explained the photoelectric effect. His work also affected and influenced a large variety of fields of physics, including the Big Bang theory (Einstein's general relativity influenced Georges Lemaître), quantum mechanics, and nuclear energy. The Manhattan Project was a research and development project that produced the first atomic bombs during World War II, and many Jewish scientists had a significant role in it. The theoretical physicist Robert Oppenheimer, often considered the "father of the atomic bomb", was chosen in 1942 to direct the Manhattan Project's laboratory at Los Alamos. Other participants included the physicist Leó Szilárd, who conceived the nuclear chain reaction; Edward Teller, "the father of the hydrogen bomb", and Stanislaw Ulam; Eugene Wigner, who contributed to the theory of the atomic nucleus and elementary particles; Hans Bethe, whose work included stellar nucleosynthesis and who headed the Theoretical Division at the secret Los Alamos laboratory; and Richard Feynman, Niels Bohr, Victor Weisskopf, and Joseph Rotblat. The mathematician and physicist Alexander Friedmann pioneered the theory that the universe is expanding, governed by a set of equations he developed, now known as the Friedmann equations. The physicist and radio astronomer Arno Allan Penzias was a co-discoverer of the cosmic microwave background radiation, which helped establish the Big Bang theory; the scientists Robert Herman and Ralph Alpher also worked in that field.
In quantum mechanics the Jewish role was significant as well, and many of the theory's most influential figures and pioneers were Jewish: Niels Bohr, with his work on atomic structure; Max Born (the Born rule and matrix mechanics); Wolfgang Pauli; Richard Feynman (quantum electrodynamics); Fritz London, whose work included the London dispersion force and the London equations; Walter Heitler and Julian Schwinger, with their work on quantum electrodynamics; Asher Peres, a pioneer of quantum information; and David Bohm (the quantum potential). Sigmund Freud, known as the father of psychoanalysis, is one of the most influential scientists of the 20th century. In creating psychoanalysis, a clinical method for treating psychopathology through dialogue between a patient and a psychoanalyst, Freud developed therapeutic techniques such as the use of free association, and discovered transference, establishing its central role in the analytic process. Freud's redefinition of sexuality to include its infantile forms led him to formulate the Oedipus complex as the central tenet of psychoanalytical theory. His analysis of dreams as wish-fulfillments provided him with models for the clinical analysis of symptom formation and the mechanisms of repression, as well as for the elaboration of his theory of the unconscious as an agency disruptive of conscious states of mind. Freud postulated the existence of libido, an energy with which mental processes and structures are invested and which generates erotic attachments, and a death drive, the source of repetition, hate, aggression, and neurotic guilt. John von Neumann, a mathematician and physicist, made major contributions to a number of fields, including the foundations of mathematics, functional analysis, ergodic theory, geometry, topology, numerical analysis, quantum mechanics, hydrodynamics, and game theory. He also did major work in computing and the development of the computer: he suggested and described the computer architecture now called the von Neumann architecture, and worked on linear programming, self-replicating machines, stochastic computing, and statistics. Emmy Noether was an influential mathematician known for her groundbreaking contributions to abstract algebra and theoretical physics. Described by many prominent scientists as the most important woman in the history of mathematics, she revolutionized the theories of rings, fields, and algebras. In physics, Noether's theorem explains the fundamental connection between symmetry and conservation laws. Other remarkable contributors include Heinrich Hertz and Steven Weinberg in electromagnetism; Carl Sagan, whose contributions were central to the discovery of the high surface temperature of Venus and who is known for his contributions to the scientific search for extraterrestrial life; Felix Hausdorff (a founder of modern topology); Edward Witten (M-theory); Vitaly Ginzburg and Lev Landau (Ginzburg–Landau theory); Yakir Aharonov (Aharonov–Bohm effect); Boris Podolsky and Nathan Rosen (EPR paradox); Moshe Carmeli (gauge theory); Rudolf Lipschitz (Lipschitz continuity); Paul Cohen (continuum hypothesis, axiom of choice); Laurent Schwartz (theory of distributions); Grigory Margulis (Lie groups); Richard M. Karp (theory of computation); Adi Shamir (RSA, cryptography); Judea Pearl (artificial intelligence, Bayesian networks); Max Newman (Colossus computer); and Carl Gustav Jacob Jacobi (Jacobi elliptic functions, the Jacobian matrix and determinant, the Jacobi symbol).
Sidney Altman (molecular biology, RNA); Melvin Calvin (the Calvin cycle); Otto Wallach (alicyclic compounds); Paul Berg (biochemistry of nucleic acids); Lazăr Edeleanu (synthesis of amphetamine); Ada Yonath (crystallography, structure of the ribosome); Dan Shechtman (quasicrystals); Julius Axelrod and Bernard Katz (neurotransmitters); Elie Metchnikoff (discovery of the macrophage); Selman Waksman (discovery of streptomycin); Rosalind Franklin (DNA); Carl Djerassi (the contraceptive pill); Stephen Jay Gould (evolutionary biology); Baruch Samuel Blumberg (hepatitis B virus); Jonas Salk and Albert Sabin (developers of the polio vaccines); and Paul Ehrlich (discovery of the blood–brain barrier). In fields such as psychology and neurology: Otto Rank, Viktor Frankl, Stanley Milgram, and Solomon Asch; in linguistics: Noam Chomsky, Franz Boas, Roman Jakobson, Edward Sapir, and Joseph Greenberg; and in sociology: Theodor Adorno, Nathan Glazer, Erving Goffman, and Georg Simmel. Besides scientific discoveries and research, Jews have produced significant and influential innovations in a wide variety of fields, for example: Siegfried Marcus, automobile pioneer and inventor of the first petroleum-powered car (56 years after the first internal combustion car); Emile Berliner, developer of the disc record phonograph; Mikhail Gurevich, co-designer of the MiG aircraft; Theodore Maiman, inventor of the laser; Robert Adler, inventor of the wireless remote control for televisions; Edwin H. Land, inventor of the Land Camera; Bob Kahn, co-inventor of TCP/IP; Bram Cohen, creator of BitTorrent; Sergey Brin and Larry Page, creators of Google; László Bíró, inventor of the ballpoint pen; Simcha Blass, pioneer of drip irrigation; Lee Felsenstein, designer of the Osborne 1; Zeev Suraski and Andi Gutmans, co-creators of PHP and founders of Zend Technologies; and Ralph H. Baer, "the father of video games". Literature and poetry In some places where there have been relatively high concentrations of Jews, distinct secular Jewish subcultures have arisen. For example, ethnic Jews formed an enormous proportion of the literary and artistic life of Vienna, Austria at the end of the 19th century, and of New York City 50 years later (and of Los Angeles in the mid-to-late 20th century). Many of these creative Jews were not particularly religious people. In general, Jewish artistic culture in various periods reflected the cultures in which Jews lived. Literary and theatrical expressions of secular Jewish culture may be in specifically Jewish languages such as Hebrew, Yiddish, Judeo-Tat, or Ladino, or they may be in the language of the surrounding cultures, such as English or German. Secular literature and theater in Yiddish largely began in the 19th century and was in decline by the middle of the 20th century. The revival of Hebrew beyond its use in the liturgy is largely an early 20th-century phenomenon and is closely associated with Zionism. Apart from the use of Hebrew in Israel, whether a Jewish community speaks a Jewish or non-Jewish language as its main vehicle of discourse generally depends on how isolated or assimilated that community is. For example, the Jews in the shtetls of Poland and on the Lower East Side of Manhattan during the early 20th century spoke Yiddish most of the time, while assimilated Jews in 19th- and early 20th-century Germany spoke German, and American-born Jews in the United States speak English. Jewish authors have both created a unique Jewish literature and contributed to the national literatures of many of the countries in which they live.
Though not strictly secular, the Yiddish works of authors like Sholem Aleichem (whose collected works amounted to 28 volumes) and Isaac Bashevis Singer (winner of the 1978 Nobel Prize in Literature) form their own canon, focusing on the Jewish experience in both Eastern Europe and America. In the United States, Jewish writers like Philip Roth, Saul Bellow, and many others are considered among the greatest American authors, and incorporate a distinctly secular Jewish view into many of their works. The poetry of Allen Ginsberg often touches on Jewish themes (notably the early autobiographical works such as Howl and Kaddish). Other famous Jewish authors who made contributions to world literature include the German poet Heinrich Heine, the Hungarian poet Miklós Radnóti, the Canadian author Mordecai Richler, the Russian author Isaac Babel, Franz Kafka of Prague, and Harry Mulisch, whose novel The Discovery of Heaven was voted the "Best Dutch Book Ever" in a 2007 poll. In Modern Judaism: An Oxford Guide, Yaakov Malkin writes: "Secular Jewish culture embraces literary works that have stood the test of time as sources of aesthetic pleasure and ideas shared by Jews and non-Jews, works that live on beyond the immediate socio-cultural context within which they were created. They include the writings of such Jewish authors as Sholem Aleichem, Itzik Manger, Isaac Bashevis Singer, Philip Roth, Saul Bellow, S.Y. Agnon, Isaac Babel, Martin Buber, Isaiah Berlin, Haim Nahman Bialik, Yehuda Amichai, Amos Oz, A.B. Yehoshua, and David Grossman. It boasts masterpieces that have had a considerable influence on all of western culture, Jewish culture included – works such as those of Heinrich Heine, Gustav Mahler, Leonard Bernstein, Marc Chagall, Jacob Epstein, Ben Shahn, Amedeo Modigliani, Franz Kafka, Max Reinhardt (Goldman), Ernst Lubitsch, and Woody Allen." Other notable contributors are Isaac Asimov, author of the Foundation series and of works such as I, Robot, Nightfall, and The Gods Themselves; Joseph Heller (Catch-22); R. L. Stine (the Goosebumps series); J. D. Salinger (The Catcher in the Rye); Michael Chabon (The Amazing Adventures of Kavalier & Clay, The Yiddish Policemen's Union); Marcel Proust (In Search of Lost Time); Arthur Miller (Death of a Salesman, The Crucible); Will Eisner (A Contract with God); Shel Silverstein (The Giving Tree); Arthur Koestler (Darkness at Noon, The Thirteenth Tribe); and Saul Bellow (Herzog). The historical novel series The Accursed Kings by Maurice Druon was an inspiration for George R. R. Martin's A Song of Ice and Fire novels. Among recipients of the Nobel Prize in Literature, 13% were or are Jewish. Another aspect of Jewish literature is ethical writing, known as Musar literature; this literature has been composed by both religious and secular authors. Hebrew poetry has been written by various poets across the eras of Jewish history. Biblical poetry is the poetry of biblical times as expressed in the Hebrew Bible and Jewish sacred texts. In medieval times, Jewish poetry was expressed mainly in piyyutim and by poets such as Yehuda Halevi, Samuel ibn Naghrillah, Solomon ibn Gabirol, Moses ibn Ezra, Abraham ibn Ezra, and Dunash ben Labrat.
Modern Hebrew poetry belongs mostly to the era of the revival of the Hebrew language and after; it was pioneered by Moshe Chaim Luzzatto in the Haskalah era and carried forward by poets such as Hayim Nahman Bialik, Nathan Alterman, and Shaul Tchernichovsky. Theatre The Ukrainian Jew Abraham Goldfaden founded the first professional Yiddish-language theatre troupe in Iași, Romania in 1876. The next year, his troupe achieved enormous success in Bucharest. Within a decade, Goldfaden and others brought Yiddish theater to Ukraine, Russia, Poland, Germany, New York City, and other cities with significant Ashkenazic populations. Between 1890 and 1940, over a dozen Yiddish theatre groups existed in New York City alone, in the Yiddish Theater District, performing original plays, musicals, and Yiddish translations of theatrical works and opera. Perhaps the most famous Yiddish-language play is The Dybbuk (1919) by S. Ansky. Yiddish theater in New York in the early 20th century rivalled English-language theater in quantity and often surpassed it in quality. A 1925 New York Times article remarks, "…Yiddish theater… is now a stable American institution and no longer dependent on immigration from Eastern Europe. People who can neither speak nor write Yiddish attend Yiddish stage performances and pay Broadway prices on Second Avenue." The article also mentions other aspects of a New York Jewish cultural life "in full flower" at that time, among them the fact that the extensive New York Yiddish-language press of the time included seven daily newspapers. In fact, however, the next generation of American Jews spoke mainly English, to the exclusion of Yiddish; they brought the artistic energy of Yiddish theater into the American theatrical mainstream, but usually in a less specifically Jewish form. Yiddish theater, most notably the Moscow State Jewish Theater directed by Solomon Mikhoels, also played a prominent role in the arts scene of the Soviet Union until Stalin's 1948 reversal in government policy toward the Jews (see Rootless cosmopolitan, Night of the Murdered Poets). Montreal's Dora Wasserman Yiddish Theatre continues to thrive after 50 years of performance. From their emancipation to World War II, Jews were very active and sometimes even dominant in certain forms of European theatre, and after the Holocaust many Jews continued to contribute to that cultural form. For example, in pre-Nazi Germany, where Nietzsche asked "What good actor of today is not Jewish?", acting, directing, and writing positions were often filled by Jews. "In Imperial Berlin, Jewish artists could be found in the forefront of the performing arts, from high drama to more popular forms like cabaret and revue, and eventually film. Jewish audiences patronized innovative theater, regardless of whether they approved of what they saw." The British historian Paul Johnson, commenting on Jewish contributions to European culture at the fin de siècle, writes that the area where Jewish influence was strongest was the theatre, especially in Berlin. Playwrights like Carl Sternheim, Arthur Schnitzler, Ernst Toller, Erwin Piscator, Walter Hasenclever, Ferenc Molnár, and Carl Zuckmayer, and influential producers like Max Reinhardt, appeared at times to dominate the stage, which tended to be modishly left-wing, pro-republican, experimental, and sexually daring.
But it was certainly not revolutionary, and it was cosmopolitan rather than Jewish." Jews also made similar, if not as massive, contributions to theatre and drama in Austria, Britain, France, and Russia (in the national languages of those countries). Jews in Vienna, Paris and German cities found cabaret both a popular and effective means of expression, as German cabaret in the Weimar Republic "was mostly a Jewish art form". The involvement of Jews in Central European theatre was halted during the rise of the Nazis and the purging of Jews from cultural posts, though many emigrated to Western Europe or the United States and continued working there. In the early 20th century the traditions of New York's vibrant Yiddish Theatre District both rivaled and fed into Broadway. In the English-speaking theatre, Jewish émigrés brought novel theatrical ideas from Europe, such as the theatrical realist movement and the philosophy of Konstantin Stanislavski, whose teachings would influence many Jewish American acting teachers such as the Yiddish theatre-trained acting theorist Stella Adler. Jewish immigrants were instrumental in the creation and development of the genre of musical theatre and earlier forms of theatrical entertainment in America, and would create a new, distinctly American art form: the Broadway musical. Brandeis University Professor Stephen J. Whitfield has commented that "More so than behind the screen, the talent behind the stage was for over half a century virtually the monopoly of one ethnic group. That is... [a] feature which locates Broadway at the center of Jewish culture". New York University Professor Laurence Maslon says that "There would be no American musical without Jews… Their influence is corollary to the influence of black musicians on jazz; there were as many Jews involved in the form". Other writers, such as Jerome Charyn, have noted that musical theatre and other forms of American entertainment are uniquely indebted to the contributions of Jewish Americans, since "there might not have been a modern Broadway without the 'Asiatic horde' of comedians, gossip columnists, songwriters, and singers that grew out of the ghetto, whether it was on the Lower East Side, Harlem (a Jewish ghetto before it was a black one), Newark, or Washington, D.C." Likewise, in the analysis of Aaron Kula, director of The Klezmer Company, "...the Jewish experience has always been best expressed by music, and Broadway has always been an integral part of the Jewish American experience... The difference is that one can expand the definition of 'Jewish Broadway' to include an interdisciplinary roadway with a wide range of artistic activities packed onto one avenue—theatre, opera, symphony, ballet, publishing companies, choirs, synagogues and more. This vibrant landscape reflects the life, times and creative output of the Jewish American artist." In the 19th and early 20th centuries the European operetta, a precursor of the musical, often featured the work of Jewish composers such as Paul Abraham, Leo Ascher, Edmund Eysler, Leo Fall, Bruno Granichstaedten, Jacques Offenbach, Emmerich Kalman, Sigmund Romberg, Oscar Straus and Rudolf Friml; the latter four eventually moved to the United States and produced their works on the New York stage.
One of the librettists for Bizet's Carmen (not an operetta proper but rather a work of the earlier opéra comique form) was the Jewish Ludovic Halévy, nephew of the composer Fromental Halévy (Bizet himself was not Jewish, but he married the elder Halévy's daughter; many have suspected that he was descended from Jewish converts to Christianity, and others have noticed Jewish-sounding intervals in his music). The Viennese librettist Victor Leon summarized the connection of Jewish composers and writers with the form of operetta: "The audience for operetta wants to laugh beneath tears—and that is exactly what Jews have been doing for the last two thousand years since the destruction of Jerusalem". Another factor in the evolution of musical theatre was vaudeville, and during the early 20th century the form was explored and expanded by Jewish comedians and actors such as Jack Benny, Fanny Brice, Eddie Cantor, The Marx Brothers, Anna Held, Al Jolson, Molly Picon, Sophie Tucker and Ed Wynn. During the period when Broadway was monopolized by revues and similar entertainments, the Jewish producer Florenz Ziegfeld dominated the theatrical scene with his Follies. By 1910, Jews (the vast majority of them immigrants from Eastern Europe) already comprised a quarter of the population of New York City, and almost immediately Jewish artists and intellectuals began to show their influence on the cultural life of that city and, in time, the country as a whole. Likewise, while the modern musical can best be described as a fusion of operetta, earlier American entertainment, and African-American culture and music, as well as Jewish culture and music, the actual authors of the first "book musicals" were the Jewish Jerome Kern, Oscar Hammerstein II, George and Ira Gershwin, George S. Kaufman and Morrie Ryskind. From that time until the 1980s, the vast majority of successful musical theatre composers, lyricists, and book-writers were Jewish (a notable exception is the Protestant Cole Porter, who acknowledged that the reason he was so successful on Broadway was that he wrote what he called "Jewish music"). Rodgers and Hammerstein, Frank Loesser, Lerner and Loewe, Stephen Sondheim, Leonard Bernstein, Stephen Schwartz, Kander and Ebb and dozens of others during the "Golden Age" of musical theatre were Jewish. Since the Tony Award for Best Original Score was instituted in 1947, approximately 70% of nominated scores and 60% of winning scores have been by Jewish composers. Among successful British and French musical writers in both the West End and on Broadway, Claude-Michel Schönberg and Lionel Bart, among others, are Jewish. One explanation of the affinity of Jewish composers and playwrights for the musical is that "traditional Jewish religious music was most often led by a single singer, a cantor, while Christians emphasize choral singing." Many of these writers used the musical to explore issues relating to assimilation, the acceptance of the outsider in society, the racial situation in the United States, the overcoming of obstacles through perseverance, and other topics pertinent to Jewish Americans and Western Jews in general, often using subtle and disguised stories to get this point across. For example, Kern, Rodgers, Hammerstein, the Gershwins, Harold Arlen and Yip Harburg wrote musicals and operas aiming to normalize societal toleration of minorities and urging racial harmony; these works included Show Boat, Porgy and Bess, Finian's Rainbow, South Pacific and The King and I.
Towards the end of the Golden Age, writers also began to openly and overtly tackle Jewish subjects and issues, as in Fiddler on the Roof and Rags; Bart's Blitz! also tackles relations between Jews and Gentiles. Jason Robert Brown and Alfred Uhry's Parade is a sensitive exploration of both antisemitism and historical American racism. The original concept that became West Side Story was set on the Lower East Side during Easter–Passover celebrations; the rival gangs were to be Jewish and Italian Catholic. The ranks of prominent Jewish producers, directors, designers and performers include Boris Aronson, David Belasco, Joel Grey, the Minskoff family, Zero Mostel, Joseph Papp, Mandy Patinkin, the Nederlander family, Harold Prince, Max Reinhardt, Jerome Robbins, the Shubert family and Julie Taymor. Jewish playwrights have also contributed to non-musical drama and theatre, both on Broadway and regionally. Edna Ferber, Moss Hart, Lillian Hellman, Arthur Miller and Neil Simon are only some of the prominent Jewish playwrights in American theatrical history. Approximately 34% of the plays and musicals that have won the Pulitzer Prize for Drama were written and composed by Jewish Americans. The Association for Jewish Theater is a contemporary organization that includes both American and international theaters that focus on theater with Jewish content; it has also expanded to include Jewish playwrights. The earliest known Hebrew-language drama was written around 1550 by a Jewish-Italian writer from Mantua. A few works were written by rabbis and Kabbalists in 17th-century Amsterdam, where Jews were relatively free from persecution and had flourishing religious and secular Jewish cultures. All of these early Hebrew plays were about Biblical or mystical subjects, often in the form of Talmudic parables. During the post-Emancipation period in 19th-century Europe, many Jews translated great European plays such as those by Shakespeare, Molière and Schiller, giving the characters Jewish names and transplanting the plot and setting into a Jewish context. Modern Hebrew theatre and drama, however, began with the development of Modern Hebrew in Europe (the first professional Hebrew theatrical performance was in Moscow in 1918) and was "closely linked with the Jewish national renaissance movement of the twentieth century. The historical awareness and the sense of primacy which accompanied the Hebrew theatre in its early years dictated the course of its artistic and aesthetic development". These traditions were soon transplanted to Israel. Playwrights such as Natan Alterman, Hayyim Nahman Bialik, Leah Goldberg, Ephraim Kishon, Hanoch Levin, Aharon Megged, Moshe Shamir, Avraham Shlonsky, Yehoshua Sobol and A. B. Yehoshua have written Hebrew-language plays. Common themes in these works include the Holocaust, the Arab–Israeli conflict, the meaning of Jewishness, and contemporary secular–religious tensions within Jewish Israel. The best-known Hebrew theatre company, and Israel's national theatre, is Habima (meaning "the stage" in Hebrew), formed in 1913 in Lithuania and re-established in 1917 in Russia; another prominent Israeli theatre company is the Cameri Theatre, which is "Israel's first and leading repertory theatre".
The first theatrical event by Mountain Jews took place in December 1903, when Asaf Agarunov, a teacher and Zionist, staged a story by Naum Shoykovich translated from Hebrew, "The Burn for Burn", in honor of the wedding of the schoolteacher Nagdimuna ben Simona (Shimunov). In 1918, a drama studio headed by Rabbi Yashaiyo Rabinovich was opened in Derbent. In 1935, the first Soviet theatre opened in Derbent, comprising three troupes – Russian, Mountain Jewish and Turkic. It was based on drama circles, which were led by Manashir and Khanum Shalumov. Initially men played the female roles in the circle; later, women began to take part in the theatre. In 1939, the Judeo-Tat theatre won the festival of theatres in Dagestan. During World War II most of the actors were drafted into the army, and many died in the war. In 1943, the theatre resumed its work, and in 1948 it was closed; the official reason was its unprofitability. In the 1960s, the theatre resumed its activities and experienced its second heyday. The actress Akhso Ilyaguevna Shalumova (1909–1985), "Honored Artist of the Dagestan ASSR", returned to the theatre. She played Shahnugor, the wife of Shimi Derbendi (Juhuri: Шими Дербенди), in productions based on the stories of the writer Hizgil Avshalumov. In the 1970s, the People's Judeo-Tat theatre was organized. For many years its director was Abram Avdalimov, a singer, actor and playwright and "Honored Cultural Worker of the Dagestan ASSR". His successor was Roman Izyaev, who was awarded the Order of the Badge of Honour for his meritorious service. In the 1990s, the Judeo-Tat theatre experienced another crisis: it rarely held performances and did not have any premieres. Only in 2000, when it became a municipal theatre, was it able to resume its activity. From 2000 to 2002, the theatre was headed by the actor and musician Raziil Semenovich Ilyaguev (1945–2016), "Honored Worker of Culture of the Republic of Dagestan". For the next two years the theatre was headed by Alesya Isakova. In 2004, Lev Yakovlevich Manakhimov (1950–2021), "Honored Artist of the Republic of Dagestan", became the artistic director of the theatre. After Manakhimov's death, Boris Yudaev became the head of the theatre.

Cinema

In the era when Yiddish theatre was still a major force in the world of theatre, over 100 films were made in Yiddish; many are now lost. Prominent films included Shulamith (1931); the first Yiddish musical on film, His Wife's Lover (1931); A Daughter of Her People (1932); the anti-Nazi film The Wandering Jew (1933); The Yiddish King Lear (1934); Shir Hashirim (1935); the biggest Yiddish film hit of all time, Yidl Mitn Fidl (1936); Where Is My Child? (1937); Green Fields (1937); Dybuk (1937); The Singing Blacksmith (1938); Tevya (1939); Mirele Efros (1939); Lang ist der Weg (1948); and God, Man and Devil (1950). The roster of Jewish entrepreneurs in the English-language American film industry is legendary: Samuel Goldwyn, Louis B. Mayer, the Warner Brothers, David O. Selznick, Marcus Loew, Adolph Zukor and William Fox, to name just a few, continuing into recent times with such industry giants as super-agent Michael Ovitz, Michael Eisner, Lew Wasserman, Jeffrey Katzenberg, Steven Spielberg, and David Geffen. However, few of these brought a specifically Jewish sensibility either to the art of film or, with the sometime exception of Spielberg, to their choice of subject matter. The historian Eric Hobsbawm described the situation as follows: "It would be ...
pointless to look for consciously Jewish elements in the songs of Irving Berlin or the Hollywood movies of the era of the great studios, all of which were run by immigrant Jews: their object, in which they succeeded, was precisely to make songs or films which found a specific expression for 100 per cent Americanness." A more specifically Jewish sensibility can be seen in the films of the Marx Brothers, Mel Brooks, or Woody Allen; other examples of specifically Jewish films from the Hollywood film industry are the Barbra Streisand vehicle Yentl (1983) and John Frankenheimer's The Fixer (1968). More recently, Call Me By Your Name (2017) can be given as an example of a movie with a Jewish sensibility. Jewish film festivals are now held in many major cities around the world as vehicles for introducing such films to wider audiences, including the Boston, San Francisco and Jerusalem Jewish film festivals, among others.

Radio and television

The first radio networks, the Radio Corporation of America and the Columbia Broadcasting System, were created by the Jewish Americans David Sarnoff and William S. Paley, respectively. These innovators were also among the first producers of television sets, both black-and-white and color. Among the Jewish immigrant communities of America there was also a thriving Yiddish-language radio scene, with its "golden age" lasting from the 1930s to the 1950s. Although there is little specifically Jewish television in the United States (National Jewish Television, largely religious, broadcasts only three hours a week), Jews have been involved in American television from its earliest days. From Sid Caesar and Milton Berle to Joan Rivers, Gilda Radner and Andy Kaufman, to Billy Crystal and Jerry Seinfeld, Jewish stand-up comedians have been icons of American television. Other Jews who held prominent roles in early radio and television were Eddie Cantor, Al Jolson, Jack Benny, Walter Winchell and David Susskind; later figures include Larry King, Michael Savage and Howard Stern. In the analysis of Paul Johnson, "The Broadway musical, radio and TV were all examples of a fundamental principle in Jewish diaspora history: Jews opening up a completely new field in business and culture, a tabula rasa on which to set their mark, before other interests had a chance to take possession, erect guild or professional fortifications and deny them entry." One of the first televised situation comedies, The Goldbergs, was set in a specifically Jewish milieu in the Bronx. While the overt Jewish milieu of The Goldbergs was unusual for an American television series, there were a few other examples, such as Brooklyn Bridge (1991–1993) and Bridget Loves Bernie. Jews have also played an enormous role among the creators and writers of television comedies: Woody Allen, Mel Brooks, Selma Diamond, Larry Gelbart, Carl Reiner, and Neil Simon all wrote for Sid Caesar; Reiner's son Rob Reiner worked with Norman Lear on All in the Family (which often engaged with antisemitism and other issues of prejudice); Larry David and Jerry Seinfeld created the hit sitcom Seinfeld; Lorne Michaels, Al Franken, Rosie Shuster, and Alan Zweibel of Saturday Night Live breathed new life into the variety show in the 1970s. More recently, American Jews have been instrumental in "novelistic" television series such as The Wire and The Sopranos. Widely acclaimed as one of the greatest television series of all time, The Wire was created by David Simon, who also served as its executive producer, head writer, and showrunner.
Matthew Weiner produced the fifth and sixth seasons of The Sopranos and later created Mad Men. Other notable contributors include David Benioff and D. B. Weiss, creators of the Game of Thrones TV series; Ron Leavitt, co-creator of Married... with Children; Damon Lindelof and J. J. Abrams, co-creators of Lost; David Crane and Marta Kauffman, creators of Friends; Tim Kring, creator of Heroes; Sydney Newman, co-creator of Doctor Who; Darren Star, creator of Sex and the City and Melrose Place; Aaron Spelling, co-creator of Beverly Hills, 90210; Chuck Lorre, co-creator of The Big Bang Theory and Two and a Half Men; Gideon Raff, creator of Prisoners of War, on which Homeland is based; Aaron Ruben and Sheldon Leonard, co-creators of The Andy Griffith Show; Don Hewitt, creator of 60 Minutes; Garry Shandling, co-creator of The Larry Sanders Show; Ed. Weinberger, co-creator of The Cosby Show; David Milch, creator of Deadwood; Steven Levitan, co-creator of Modern Family; Dick Wolf, creator of Law & Order; David Shore, creator of House; Max Mutchnick and David Kohan, creators of Will & Grace; and Adam Horowitz and Edward Kitsis, creators of Once Upon a Time. Jews have also played a significant role in television acting, with performers such as Sarah Jessica Parker, William Shatner, Leonard Nimoy, Mila Kunis, Zac Efron, Hank Azaria, David Duchovny, Fred Savage, Zach Braff, Noah Wyle, Adam Brody, Katey Sagal, Sarah Michelle Gellar, Alyson Hannigan, Michelle Trachtenberg, David Schwimmer, Lisa Kudrow and Mayim Bialik.

Music

Jewish musical contributions also tend to reflect the cultures of the countries in which Jews live, the most notable examples being classical and popular music in the United States and Europe. Some music, however, is unique to particular Jewish communities, such as Israeli music, Israeli folk music, klezmer, Sephardic and Ladino music, and Mizrahi music. Before the Emancipation, virtually all Jewish music in Europe was sacred music, with the exception of the performances of klezmorim during weddings and other occasions. The result was a lack of a Jewish presence in European classical music until the 19th century, with very few exceptions, normally enabled by specific aristocratic protection, such as Salamone Rossi and Claude Daquin (the work of the former is considered the beginning of "Jewish art music"). After Jews were admitted to mainstream society in England (gradually, after their return in the 17th century), France, Austria-Hungary, the German Empire, and Russia (in that order), the Jewish contribution to the European music scene steadily increased, but in the form of mainstream European music, not specifically Jewish music. Notable examples of Jewish Romantic composers (by country) are Charles-Valentin Alkan, Paul Dukas and Fromental Halévy from France; Josef Dessauer, Karl Goldmark and Gustav Mahler from Bohemia (most Austrian Jews during this time were native not to what is today Austria but to the outer provinces of the Empire); Felix Mendelssohn and Giacomo Meyerbeer from Germany; and Anton and Nikolai Rubinstein from Russia. Singers included John Braham and Giuditta Pasta. There were many notable Jewish violin and piano virtuosi, including Joseph Joachim, Ferdinand David, Carl Tausig, Henri Herz, Leopold Auer, Jascha Heifetz, and Ignaz Moscheles. During the 20th century the number of Jewish composers and notable instrumentalists increased, as did their geographical distribution.
Notable Jewish 20th-century composers include Arnold Schoenberg and Alexander von Zemlinsky from Austria; Hanns Eisler and Kurt Weill from Germany; Viktor Ullmann and Jaromír Weinberger from Bohemia and later the Czech Republic (the former perished at the Auschwitz extermination camp); George Gershwin and Aaron Copland from the United States; Darius Milhaud and Alexandre Tansman from France; Alfred Schnittke and Lera Auerbach from Russia; Lalo Schifrin and Mario Davidovsky from Argentina; and Paul Ben-Haim and Shulamit Ran from Israel. There are some genres and forms of classical music with which Jewish composers have been associated, notably French grand opera during the Romantic period. The most prolific composers of this genre included Giacomo Meyerbeer, Fromental Halévy, and, later, Jacques Offenbach; Halévy's La Juive was based on a libretto by Scribe very loosely connected to the Jewish experience. While orchestral and operatic works by Jewish composers would in general be considered secular, many Jewish (as well as non-Jewish) composers have incorporated Jewish themes and motifs into their music. Sometimes this is done covertly, such as the klezmer band music that many critics and observers hear in the third movement of Mahler's Symphony No. 1; this type of covert Jewish reference was most common during the 19th century, when openly displaying one's Jewishness would most likely hamper a Jew's chances at assimilation. During the 20th century, however, many Jewish composers wrote music with direct Jewish references and themes, e.g. David Amram (Symphony – "Songs of the Soul"), Leonard Bernstein (Kaddish Symphony, Chichester Psalms), Ernest Bloch (Schelomo), Arnold Schoenberg, Mario Castelnuovo-Tedesco (Violin Concerto No. 2), Kurt Weill (The Eternal Road) and Hugo Weisgall (Psalm of the Instant Dove). In the late twentieth century, prominent composers such as Morton Feldman, György Ligeti and Alfred Schnittke made significant contributions to contemporary music. The great songwriters and lyricists of American traditional popular music and jazz standards were predominantly Jewish, including Harold Arlen, Jerome Kern, George Gershwin, Frank Loesser, Richard Rodgers and Irving Berlin. Popular music for the Jewish world at large today stems mainly from Israeli music, more specifically Mizrahi music. Popular Jewish artists today include Omer Adam, Noa Kirel, Avior Malasa, A-WA, Eden Alene, Eyal Golan, Debbie Friedman, Barbra Streisand and others.

Dance

Deriving from Biblical traditions, Jewish dance has long been used by Jews as a medium for the expression of joy and other communal emotions. Each Jewish diasporic community developed its own dance traditions for wedding celebrations and other distinguished events. For Ashkenazi Jews in Eastern Europe, for example, dances whose names corresponded to the different forms of klezmer music being played were a staple of the shtetl wedding ceremony. Jewish dances were influenced both by surrounding Gentile traditions and by Jewish sources preserved over time. "Nevertheless the Jews practiced a corporeal expressive language that was highly differentiated from that of the non-Jewish peoples of their neighborhood, mainly through motions of the hands and arms, with more intricate legwork by the younger men." In most religiously traditional communities, however, members of the opposite sex dancing together, or dancing at times other than at these events, was frowned upon.
Sport

Historically, Jews were often seen as unathletic. However, sport has played a role in integrating the Jewish diaspora into its local societies. For example, in the United States, the Jewish presence in baseball was important during a major wave of immigration in the early 20th century, and sport was used to shape the assimilation and community formation of both American and British Jews. Jews have also been dominant in chess.

Humor

Jewish humor is the long tradition of humor in Judaism dating back to the Torah and the Midrash, but generally refers to the more recent stream of verbal, frequently self-deprecating and often anecdotal humor originating in Europe. Jewish humor took root in the United States over the last hundred years, beginning with vaudeville and continuing through radio, stand-up, film, and television. A significant number of American comedians have been or are Jewish. Notable Jewish-American comedians include Woody Allen, Jerry Seinfeld, Larry David, Sammy Davis Jr., Rachel Dratch, Gilbert Gottfried, Ilana Glazer, Jan Murray, Julie Klausner, Don Rickles, Andy Samberg, Gene Wilder, Groucho Marx, Gianmarco Soresi, Ben Schwartz, and many others.

Visual arts

Despite fears by early religious communities of art being used for idolatrous purposes, Jewish sacred art is recorded in the Tanakh and extends throughout Jewish Antiquity and the Middle Ages. The Tabernacle and the two Temples in Jerusalem form the first known examples of "Jewish art". During the first centuries of the Common Era, Jewish religious art was also created in regions surrounding the Mediterranean, such as Syria and Greece, including frescoes on the walls of synagogues – of which the Dura-Europos Synagogue was the only survivor prior to its destruction by ISIL in 2017 – as well as the Jewish catacombs in Rome. A number of luxury pieces of gold glass from the later Roman period have Jewish motifs. Several Hellenistic-style floor mosaics have also been excavated in synagogues from late antiquity in Israel and Palestine, especially depictions of the signs of the Zodiac, which were apparently acceptable in a low-status position on the floor. Some, such as that at Naaran, show evidence of a reaction against images of living creatures around 600 CE. The decoration of sarcophagi and walls at the cave cemetery at Beit She'arim shows a mixture of Jewish and Hellenistic motifs. Medieval Rabbinical and Kabbalistic literature also contains textual and graphic art, most famously illuminated haggadahs such as the Sarajevo Haggadah, and other manuscripts like the Nuremberg Mahzor. Some of these were illustrated by Jewish artists and some by Christians; equally, some Jewish artists and craftsmen in various media worked on Christian commissions. Outside of Europe, Yemenite Jewish silversmiths developed a distinctive style of finely wrought silver that is admired for its artistry. Johnson again summarizes this sudden change from limited participation by Jews in visual art (as in many other arts) to a large movement by them into this branch of European cultural life: "Again, the arrival of the Jewish artist was a strange phenomenon.
It is true that, over the centuries, there had been many animals (though few humans) depicted in Jewish art: lions on Torah curtains, owls on Judaic coins, animals on the Capernaum capitals, birds on the rim of the fountain-basin in the 5th-century Naro synagogue in Tunis; there were carved animals, too, on timber synagogues in eastern Europe – indeed the Jewish wood-carver was the prototype of the modern Jewish plastic artist. A book of Yiddish folk-ornament, printed at Vitebsk in 1920, was similar to Chagall's own bestiary. But the resistance of pious Jews to portraying the living human image was still strong at the beginning of the 20th century." There were few secular Jewish artists in Europe prior to the Emancipation, which spread through Europe with the Napoleonic conquests. There were exceptions: Salomon Adler, for instance, was a prominent portrait painter in 18th-century Milan. The delayed participation in the visual arts parallels the lack of Jewish participation in European classical music until the nineteenth century; it was progressively overcome with the rise of Modernism in the 20th century. There were many Jewish artists in the 19th century, but Jewish artistic activity boomed toward the end of World War I. The Jewish artistic renaissance has its roots in the 1901 Fifth Zionist Congress, which included an art exhibition featuring the Jewish artists E.M. Lilien and Hermann Struck. The exhibition helped legitimize art as an expression of Jewish culture. According to Nadine Nieszawer, "Until 1905, Jews were always plunged into their books but from the first Russian Revolution, they became emancipated, committed themselves in politics and became artists. A real Jewish cultural rebirth". Individual Jews figured in many of the modern artistic movements of Europe. (With the exception of those living in isolated Jewish communities, most Jews who contributed to secular Jewish culture also participated in the cultures of the peoples and nations they lived among; in most cases, their work and lives did not exist in two distinct cultural spheres but rather in one that incorporated elements of both.) During the early 20th century, Jews figured particularly prominently in the École de Paris, centered in Montparnasse (including Chaim Soutine, Marc Chagall, Jules Pascin, Yitzhak Frenkel Frenel and Michel Kikoïne), and after World War II among the abstract expressionists: Alexander Bogen, Helen Frankenthaler, Adolph Gottlieb, Philip Guston, Al Held, Lee Krasner, Barnett Newman, Milton Resnick, Jack Tworkov, Mark Rothko, and Louis Schanker, as well as among contemporary artists, Modernists and Postmodernists. Many Russian Jews were prominent in the art of scenic design, particularly the aforementioned Chagall and Aronson, as well as the revolutionary Léon Bakst, who like the other two also painted. One Mexican Jewish artist was Pedro Friedeberg; historians disagree as to whether Frida Kahlo's father was Jewish or Lutheran. The prominent Slovak artist Dominik Skutecký was also Jewish. Among major artists, Chagall may be the most specifically Jewish in his themes. But as art shades into graphic design, Jewish names and themes become more prominent: Leonard Baskin, Al Hirschfeld, Peter Max, Ben Shahn, Art Spiegelman and Saul Steinberg. The collage artist Wallace Berman's engagement with Hebrew reflected the Beat Generation's wider exploration of esoteric spiritual practices such as Zen, palm reading, astrology, kabbalah and psychedelic drugs.
Born on Staten Island, Berman moved to Los Angeles, where the Hebrew letters on storefront windows and in Yiddish-language newspapers fascinated him. According to historian Richard Candida Smith, "Berman's interest in the Hebrew alphabet and its functions in Jewish mysticism was part of an effort to reclaim his ethnic identity." In 1989, the painter R.B. Kitaj published his "First Diasporist Manifesto", a short book in which he analysed how his art was based on his alienation as a Jew born in Cleveland, Ohio and living in London. In 2007, a second illustrated stream-of-consciousness book followed, "The Second Diasporist Manifesto". Jews have also played a very important role in media other than painting; their involvement in sculpture came rather later, perhaps due to lingering feelings against "graven images". But there were many notable Jewish sculptors in the later 19th and 20th centuries, including Moses Jacob Ezekiel (American, d. 1917), Sir Jacob Epstein (American-British, d. 1959), Ossip Zadkine (French, d. 1967), Naum Gabo (Russian, d. 1977), Oscar Nemon (Croatian, d. 1985), Louise Nevelson (American, d. 1988), and Herbert Ferber (American, d. 1991). In photography, notable figures include André Kertész, Robert Frank, Helmut Newton, Garry Winogrand, Cindy Sherman, Steve Lehman, and Adi Nes; in installation art and street art, notable figures include Sigalit Landau, Dede, and Michal Rovner.

Comics, cartoons, and animation

Graphic art, as expressed in the art of comics, has been a key field for Jewish artists as well. In the Golden and Silver ages of American comic books, the Jewish role was overwhelming, and a large number of the medium's foremost creators have been Jewish. Max Gaines was a pioneering figure in the creation of the modern comic book, publishing the first one, Famous Funnies, in 1935. In 1939, he founded, with Jack Liebowitz and Harry Donenfeld, All-American Publications (the AA Group). The publisher is known for the creation of several superheroes, such as the original Atom, Flash, Green Lantern, Hawkman, and Wonder Woman. Donenfeld and Liebowitz also owned National Allied Publications, which distributed Detective Comics and Action Comics; that company was a precursor of DC Comics. In 1939, the pulp magazine publisher Martin Goodman formed Timely Publications, a company to be known, since the 1960s, as Marvel Comics. At Marvel, creators such as Stan Lee, Jack Kirby, Larry Lieber and Joe Simon created a large variety of characters and cultural icons, including Spider-Man, the Hulk, Captain America, Iron Man, Thor, Daredevil, and the teams Fantastic Four, Avengers, X-Men (including many of its characters) and S.H.I.E.L.D. Stan Lee attributed the prominent Jewish role in comics to Jewish culture. At DC Comics, the Jewish role was significant as well; the character of Superman, created by the Jewish artists Joe Shuster and Jerry Siegel, is partly based on the biblical figure of Samson, and it has also been suggested that Superman was partly influenced by Moses and other Jewish elements. Others at DC Comics were Bob Kane, Bill Finger and Martin Nodell, creators of Batman, Green Lantern and many related characters such as Robin, the Joker, the Riddler, Scarecrow and Catwoman, as well as Gil Kane, co-creator of the Atom and Iron Fist. Many of those involved in the later ages of comics are also Jewish, such as Julius Schwartz, Joe Kubert, Jenette Kahn, Len Wein, Peter David, Neil Gaiman, Chris Claremont and Brian Michael Bendis.
There are also a large number of Jewish characters among comics superheroes, such as Magneto, Quicksilver, Kitty Pryde, the Thing, Sasquatch, Sabra, Ragman, Legion, and Moon Knight, many of whom were and are influenced by events in Jewish history and elements of Jewish life. In 1944, Max Gaines founded EC Comics. The company is known for specializing in horror fiction, crime fiction, satire, military fiction and science fiction from the 1940s through the mid-1950s, notably the Tales from the Crypt series, The Haunt of Fear, The Vault of Horror, Crime SuspenStories and Shock SuspenStories. Jewish artists associated with the publisher include Al Feldstein, Dave Berg, and Jack Kamen. Will Eisner was an American cartoonist, one of the earliest to work in the American comic book industry, and the creator of the Spirit comics series and the graphic novel A Contract with God. The Eisner Award, given each year to recognize achievements in the comics medium, was named in his honor. In 1952, William Gaines and Harvey Kurtzman founded Mad, an American humor magazine. It was widely imitated and influential, affecting satirical media as well as the cultural landscape of the 20th century, with editor Al Feldstein increasing readership to more than two million during its 1970s circulation peak. Other known cartoonists include Lee Falk, creator of The Phantom and Mandrake the Magician; in Hebrew comics, Michael Netzer, creator of Uri-On, and Uri Fink, creator of Zbeng!; William Steig, creator of Shrek!; Daniel Clowes, creator of Eightball; and Art Spiegelman, creator of the graphic novel Maus and, with Françoise Mouly, of Raw. In animation there have been many Jewish animators: Genndy Tartakovsky, creator of several animated TV series such as Dexter's Laboratory and Samurai Jack; Matt Stone, co-creator of South Park; David Hilberman, who helped animate Bambi and Snow White and the Seven Dwarfs; Friz Freleng of Looney Tunes; C. H. Greenblatt, creator of Chowder and Harvey Beaks; Ralph Bakshi, whose credits include Fritz the Cat, Mighty Mouse: The New Adventures, Wizards, The Lord of the Rings, Heavy Traffic, Coonskin, Hey Good Lookin', Fire and Ice, and Cool World; Alex Hirsch, creator of Gravity Falls; the brothers Max, Dave and Lou Fleischer of Fleischer Studios, known for animating Betty Boop, Popeye and Superman; and Rebecca Sugar, creator of Steven Universe. Several animation companies were founded by Jews, such as DreamWorks, whose productions include Shrek, Madagascar, Kung Fu Panda and The Prince of Egypt, and Warner Bros., whose animation division is known for cartoons such as Looney Tunes, Tiny Toon Adventures, Animaniacs, Pinky and the Brain and Freakazoid!

Cuisine

Jewish cooking combines the food of the many cultures in which Jews have lived, including Middle Eastern, Mediterranean, Spanish, German and Eastern European styles of cooking, all influenced by the need for food to be kosher. Thus, Jewish foods like bagels, hummus, stuffed cabbage, and blintzes are all influenced by the culinary preferences of communities in which Jews have settled. The amalgam of these foods, plus uniquely Jewish contributions like tzimmes, cholent, malawach and matzah balls, makes up the variety of Jewish cuisine.

Philo-Semitism

Philo-Semitism (also spelled philosemitism) or Judeophilia is an interest in, respect for, and appreciation of – or in some cases a fetishization of – Jewish people, their history and culture, and the influence of Judaism, particularly on the part of gentiles.
Within the Jewish community, philo-Semitism includes an interest in Jewish culture and a love of things that are considered Jewish. Very few Jews live in East Asian countries, but Jews are viewed in an especially positive light in some of them, partly owing to shared wartime experiences during the Second World War; examples include South Korea and China. In general, Jews are positively stereotyped as being intelligent, business-savvy and committed to family values and responsibilities, though in the Western world the first two of these stereotypes more frequently take on the negatively interpreted equivalents of guile and greed. In South Korean primary schools, students are reportedly required to read the Talmud.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Houthi_movement] | [TOKENS: 12510] |
Contents Houthis The Houthi movement, officially known as Ansar Allah, is a Zaydi revivalist and Islamist political and military organization that emerged in Yemen in the 1990s. It is predominantly made up of Zaydis, and its namesake leadership is drawn largely from the al-Houthi family. The group has been a central player in Yemen's civil war, drawing widespread international condemnation for its human rights abuses, including targeting civilians and using child soldiers. The movement is designated as a terrorist organization by some countries. The Houthis are backed by Iran, and they are widely considered part of the Iranian-led "Axis of Resistance". Under the leadership of the Zaydi religious leader Hussein al-Houthi, the Houthis emerged as an opposition movement to Yemeni president Ali Abdullah Saleh, whom they accused of corruption and of being backed by Saudi Arabia and the United States. In 2003, influenced by the Lebanese Shia political and military organization Hezbollah, the Houthis adopted their official slogan against the United States, Israel, and the Jews. Al-Houthi resisted Saleh's order for his arrest and was afterwards killed by the Yemeni military in Saada in 2004, sparking the Houthi insurgency. Since then, the movement has been mostly led by his brother Abdul-Malik al-Houthi. The organization took part in the Yemeni Revolution of 2011 by participating in street protests and coordinating with other Yemeni opposition groups. They joined Yemen's National Dialogue Conference but later rejected the 2011 reconciliation deal. In late 2014, the Houthis repaired their relationship with Saleh, and with his help they took control of the capital city. The takeover prompted a Saudi-led military intervention to restore the internationally recognized government, leading to an ongoing civil war which has included missile and drone attacks against Saudi Arabia and its ally the United Arab Emirates. Following the outbreak of the Gaza war, the Houthis began to fire missiles at Israel and to attack ships off Yemen's coast in the Red Sea, which they say is in solidarity with the Palestinians and aims to facilitate the entry of humanitarian aid into the Gaza Strip. The Houthi movement attracts followers in Yemen by portraying itself as fighting for economic development and the end of the political marginalization of Zaydi Shias, as well as by promoting regional political–religious issues in its media. The Houthis have a complex relationship with Yemen's Sunnis; the movement has discriminated against Sunnis but has also allied with and recruited them. The Houthis aim to govern all of Yemen and support external movements against the United States, Israel, and Saudi Arabia. Because of the Houthis' ideological background, the conflict in Yemen is widely seen as a front of the Iran–Saudi Arabia proxy war.

History

According to Ahmed Addaghashi, a professor at Sanaa University, the Houthis began as a moderate theological movement that preached tolerance and held a broad-minded view of all the Yemeni peoples. Their first organization, "the Believing Youth" (BY), was founded in 1992 in Saada Governorate by either Mohammed al-Houthi or his brother Hussein al-Houthi. The Believing Youth established school clubs and summer camps in order to "promote a Zaydi revival" in Saada. By 1994–95, between 15,000 and 20,000 students had attended BY summer camps.
The religious material included lectures by Mohammed Hussein Fadhlallah (a Lebanese Shia scholar) and Hassan Nasrallah (Secretary General of Hezbollah). The formation of the Houthi organisations has been described by Adam Baron of the European Council on Foreign Relations as a reaction to foreign intervention. Their views include shoring up Zaydi support against the perceived threat of Saudi-influenced ideologies in Yemen and a general condemnation of the former Yemeni government's alliance with the United States, which, along with complaints regarding the government's corruption and the marginalisation of much of the Houthis' home areas in Saada, constituted the group's key grievances. Although Hussein al-Houthi, who was killed in 2004, had no official relationship with the Believing Youth (BY), he contributed, according to Zaid, to the radicalisation of some Zaydis after the 2003 invasion of Iraq. BY-affiliated youth adopted anti-American and anti-Israel slogans, which they chanted in the Al Saleh Mosque in Sanaa after Friday prayers. According to Zaid, the followers' insistence on chanting the slogans attracted the authorities' attention, further increasing government worries over the extent of the Houthi movement's influence. "The security authorities thought that if today the Houthis chanted 'Death to America', tomorrow they could be chanting 'Death to the president [of Yemen]'". In 2004, 800 BY supporters were arrested in Sanaa. President Ali Abdullah Saleh then invited Hussein al-Houthi to a meeting in Sanaa, but Hussein declined. On 18 June, Saleh sent government forces to arrest Hussein. Hussein responded by launching an insurgency against the central government but was killed on 10 September. The insurgency continued intermittently until a ceasefire agreement was reached in 2010. During this prolonged conflict, the Yemeni army and air force were used to suppress the Houthi rebellion in northern Yemen. The Saudis joined these anti-Houthi campaigns, but the Houthis prevailed against both Saleh and the Saudi army. According to the Brookings Institution, this particularly humiliated the Saudis, who had spent tens of billions of dollars on their military. The Houthis participated in the 2011 Yemeni Revolution, as well as the ensuing National Dialogue Conference (NDC). However, they rejected the provisions of the November 2011 Gulf Cooperation Council deal on the grounds that "it divide[d] Yemen into poor and wealthy regions" and also in response to the assassination of their representative at the NDC. As the revolution went on, the Houthis gained control of more territory. By 9 November 2011, the Houthis were said to be in control of two Yemeni governorates (Saada and Al Jawf) and close to taking over a third governorate (Hajjah), which would enable them to launch a direct assault on the Yemeni capital of Sanaa. In May 2012, it was reported that the Houthis controlled a majority of Saada, Al Jawf, and Hajjah governorates; they had also gained access to the Red Sea and started erecting barricades north of Sanaa in preparation for more conflict. By September 2014, the Houthis were said to control parts of the Yemeni capital, Sanaa, including government buildings and a radio station. As Houthi control expanded to the rest of Sanaa, as well as to other towns such as Rada', it was strongly challenged by Al-Qaeda. The Gulf states believed that the Houthis had accepted aid from Iran, while Saudi Arabia was aiding their Yemeni rivals.
On 20 January 2015, Houthi rebels seized the presidential palace in the capital. President Abdrabbuh Mansur Hadi was in the presidential palace during the takeover but was not harmed. The movement officially took control of the Yemeni government on 6 February, dissolving parliament and declaring its Revolutionary Committee to be the acting authority in Yemen. On 20 March 2015, the al-Badr and al-Hashoosh mosques came under suicide attack during midday prayers, and the Islamic State quickly claimed responsibility. The blasts killed 142 Houthi worshippers and wounded more than 350, making it the deadliest terrorist attack in Yemen's history. On 27 March 2015, in response to perceived Houthi threats to Sunni factions in the region, Saudi Arabia, along with Bahrain, Qatar, Kuwait, the UAE, Egypt, Jordan, Morocco, and Sudan, launched a Gulf coalition air campaign in Yemen. The coalition was supported by the United States, which helped plan the airstrikes and provided logistical and intelligence support. The US Navy has actively participated in the Saudi-led naval blockade of Houthi-controlled territory in Yemen, which humanitarian organizations argue has been the main contributing factor to the outbreak of famine in Yemen. The four-month-long Battle of Aden took place between 25 March and 22 July 2015. According to a September 2015 report by Esquire magazine, the Houthis, once outliers, had become one of the most stable and organised social and political movements in Yemen. The power vacuum created by Yemen's uncertain transitional period drew more supporters to the Houthis. Many of the formerly powerful parties, now disorganised and with an unclear vision, fell out of favour with the public, making the Houthis, under their newly branded Ansar Allah name, all the more attractive. Houthi spokesperson Mohamed Abdel Salam stated that his group had intercepted messages between the UAE and Saleh three months before Saleh's death. He told Al Jazeera that Saleh had communicated with the UAE and a number of other countries, such as Russia and Jordan, through encrypted messages. The alliance between Saleh and the Houthis broke down in late 2017, with armed clashes occurring in Sanaa from 28 November. Saleh declared the split in a televised statement on 2 December, calling on his supporters to take back the country and expressing openness to dialogue with the Saudi-led coalition. On 4 December 2017, Saleh's house in Sanaa was assaulted by fighters of the Houthi movement, according to residents. Saleh was killed by the Houthis on the same day. In January 2021, the United States designated the Houthis a terrorist organization, creating fears of an aid shortage in Yemen, but this stance was reversed a month later after Joe Biden became president. On 17 January 2022, Houthi missile and drone attacks on UAE industrial targets set fuel trucks on fire and killed three foreign workers. This was the first such attack to which the Houthis admitted, and the first to result in deaths. A response led by Saudi Arabia included a 21 January air strike on a detention centre in Yemen, resulting in at least 70 deaths. Following the outbreak of the Gaza war in 2023, the Houthis began firing missiles at Israel and attacking ships off Yemen's Red Sea coast, actions they describe as solidarity with the Palestinians and as a means of facilitating the entry of humanitarian aid into the Gaza Strip. On 31 October 2023, Houthi forces launched ballistic missiles at Israel, which were shot down by Israel's Arrow missile defense system.
Israeli officials claimed that this was the first combat ever to occur in space. As their condition for ending the attacks in the Red Sea, the Houthis demanded a ceasefire in Gaza and an end to Israel's blockade of the Gaza Strip. In January 2024, the United States and the United Kingdom conducted airstrikes against multiple Houthi targets in Yemen, and the United States designated the Houthis as a Specially Designated Global Terrorist (SDGT) group.

Ideology

The Houthi movement follows a mixed ideology with religious, Yemeni nationalist, and big-tent populist elements, imitating Hezbollah. Outsiders have argued that the group's ideological tenets are often vague and self-contradictory and that many of its slogans do not accurately reflect its aims. According to American historian Bernard Haykel, the movement's founder, Hussein al-Houthi, was influenced by a variety of religious traditions and political ideologies, making it difficult to fit him or his followers into existing categories. The Houthis have portrayed themselves as a national resistance force, defending all Yemenis from outside aggression and influences, as champions against corruption, chaos, and extremism, and as representatives of the interests of marginalized tribal groups and the Zaydi sect. Haykel argues that the Houthi movement has two central religious-ideological tenets. The first is the "Quranic Way", which encompasses the belief that the Quran does not allow for interpretation and contains everything needed to improve Muslim society. The second is the belief in the absolute, divine right of the Ahl al-Bayt (the Prophet's descendants) to rule, a belief attributed to Jaroudism, a fundamentalist offshoot of Zaydism. The group has also exploited popular discontent over corruption and the reduction of government subsidies. According to a February 2015 Newsweek report, Houthis are fighting "for things that all Yemenis crave: government accountability, the end to corruption, regular utilities, fair fuel prices, job opportunities for ordinary Yemenis and the end of Western influence". In forming alliances, the Houthi movement has at times been opportunistic, partnering with countries it later declared enemies, including the United States. The influx of various individual and tribal interests has somewhat diluted the movement's original vision over time. Around the time of the third Saada war (November 2005–February 2006), the conflict was increasingly waged "along the lines of prevalent tribal feuds", which distorted the initial socio-economic, religious and ideological aims of Hussein al-Houthi. In general, the Houthi movement has centered its belief system on the Zaydi branch of Islam, a sect almost exclusively present in Yemen. Zaydis form about 25% of the population, with Sunnis comprising the other 75%. Zaydi-led governments ruled Yemen for a thousand years, up until 1962. The Houthi movement has advocated for Zaydi revivalism in Yemen since its foundation. Although the group has framed its struggle in religious terms and put great importance on its Zaydi roots, the Houthis are not an exclusively Zaydi group. They have rejected their portrayal by others as a faction that is purportedly only interested in Zaydi-related issues. They have not publicly advocated for the restoration of the old Zaydi imamate, although analysts have argued that they might plan to restore it in the future. Most Yemenis have a low opinion of the old imamate, and Hussein al-Houthi also did not advocate the imamate's restoration.
Instead, he proposed a "Guiding Eminence" (alam al-huda): an individual descended from the Prophet who would act as a "universal leader for the world". However, he never defined this position's prerogatives or how its holder should be appointed. The movement has also recruited and allied with Sunni Muslims; according to researcher Ahmed Nagi, several themes of the Houthi ideology "such as Muslim unity, prophetic lineages, and opposition to corruption [...] allowed the Houthis to mobilize not only northern Zaydis, but also inhabitants of predominantly Shafi'i areas." However, the group is known to have discriminated against Sunni Muslims as well, closing Sunni mosques and primarily placing Zaydis in leadership positions in Houthi-controlled areas. The Houthis lost significant support among Sunni tribes after killing ex-President Saleh. Many Zaydis also oppose the Houthis, regarding them as Iranian proxies and regarding the Houthis' form of Zaydi revivalism as an attempt to "establish Shiite rule in the north of Yemen". In addition, Haykel argued that the Houthis follow "a highly politicised, revolutionary, and intentionally simplistic, even primitivist interpretation of [Zaydism]'s teachings". Their view of Islam is largely based on the teachings of Hussein al-Houthi, collected after his death in a book titled Malazim (Fascicles), a work treated by Houthis as more important than older Zaydi theological traditions, resulting in repeated disputes with established Zaydi religious leaders. The Malazim reflects a number of different religious and ideological influences, including Khomeinism and revolutionary Sunni Islamist movements such as the Muslim Brotherhood. Hussein al-Houthi believed that the "last exemplary" Zaydi scholar and leader was Al-Hadi ila'l-Haqq Yahya; later Zaydi imams were regarded as having deviated from the original form of Islam. The Houthis' belief in the "Quranic Way" also includes the rejection of tafsir (Quranic interpretations) as being derivative and divisive, meaning that they have a low opinion of most existing Islamic theological and juridical schools, including the Zaydi traditionalists based in Sanaa with whom they often clash. The Houthis claim that their actions are to fight against the alleged expansion of Salafism in Yemen and to defend their community from discrimination. The influence of Saudi-backed Salafis and other Sunni groups in Yemen had steadily increased throughout the republican era, as had that of sheikhs who sometimes cooperated with these Salafi groups for pragmatic reasons. The Salafis, who enjoyed considerable support from the Saleh regime, reportedly pursued an aggressive "policy of provocation" towards the Zaydis who inhabited the surrounding area, often accusing them of apostasy. In the years before the rise of the Houthi movement, state-supported Salafis had harassed Zaydis and destroyed Zaydi sites (most notably cemeteries) in Yemen. After their rise to power in 2014, the Houthis consequently "crushed" the Salafi community in Saada Governorate and mostly eliminated the al-Qaeda presence in the areas under their control; the Houthis view al-Qaeda as "Salafi jihadists" and thus "mortal enemies". On the other hand, between 2014 and 2019 the Houthi leadership signed multiple co-existence agreements with the Salafi community, pursuing Shia–Salafi reconciliation. The Yemeni government has often accused the Houthis of collaborating with al-Qaeda to undermine its control of southern Yemen.
In general, the Houthis' political ideology has gradually shifted from "heavily religious mobilisation and activism under Husayn to the more assertive and statesmanlike rhetoric under Abdulmalik", its current leader. Drawing strong support from the predominantly Zaydi northern tribes, the Houthi movement has often been described as a tribalist or monarchist faction in opposition to republicanism. Regardless, it has managed to rally many people outside its traditional bases to its cause and has become a major nationalist force. When armed conflict first erupted between the Yemeni government and the Houthis in 2004, President Ali Abdullah Saleh accused the Houthis and other Islamic opposition parties of trying to overthrow the government and the republican system. Houthi leaders, for their part, rejected the accusation, saying that they had never rejected the president or the republican system but were only defending themselves against government attacks on their community. After their takeover of northern Yemen in 2014, the Houthis remained committed to republicanism and continued to celebrate republican holidays. The Houthis have an ambivalent stance on the possible transformation of Yemen into a federation, or its separation into two fully independent countries, as a solution to the country's crisis. Though not opposed to these plans per se, they have rejected any plan that would, in their eyes, politically marginalize the northern tribes. Meanwhile, their opponents have asserted that the Houthis desire to institute Zaydi religious law, destabilising the government and stirring anti-American sentiment. In contrast, Hassan al-Homran, a former Houthi spokesperson, has said that "Ansar Allah supports the establishment of a civil state in Yemen. We want to build a striving modern democracy. Our goals are to fulfil our people's democratic aspirations in keeping with the Arab Spring movement." In an interview with the Yemen Times, Hussein al-Bukhari, a Houthi insider, said that the Houthis' preferred political system is a republic with elections in which women can also hold political positions, and that they do not seek to form a cleric-led government on the model of the Islamic Republic of Iran, for "we cannot apply this system in Yemen because the followers of the Shafi (Sunni) doctrine are bigger in number than the Zaydis". In 2018, the Houthi leadership proposed the establishment of a non-partisan transitional government composed of technocrats. Ali Akbar Velayati, International Affairs Advisor to Iranian Supreme Leader Ayatollah Ali Khamenei, stated in October 2014 that "We are hopeful that Ansar-Allah has the same role in Yemen as Hezbollah has in eradicating the terrorists in Lebanon". Mohammed al-Houthi criticized the Trump-brokered Abraham Accords between Israel and the United Arab Emirates as a "betrayal" of the Palestinians and of the cause of pan-Arabism. The Houthis' treatment of women and their restrictions on the arts have been a subject of debate. On one side, the movement has stated that it defends women's rights to vote and hold public office, and some feminists have fled from government-held areas into Houthi territory because the Houthis at least disempower more radical jihadists. The Houthis field their own women's security force, as well as a Girl Scouts wing. However, it has also been reported that the Houthis harass women and restrict their freedom of movement and expression.
With regard to culture, the Houthis try to spread their views through propaganda in mainstream media, social media, and poetry, as well as through the "Houthification" of the education system to "instil Huthi values and mobilise the youth to join the fight against the coalition forces". However, the Houthis have been inconsistent in how they deal with forms of artistic expression of which they disapprove. The movement has allowed radio stations to continue broadcasting music and content that the Houthis view as too Western, but has also banned certain songs and harassed artists such as wedding musicians. In one widely publicised instance, Houthi police officers stated that music could be played at a wedding party only if loudspeakers did not broadcast it; when the party guests did not conform to this demand, the main wedding singer was arrested. Journalist Robert F. Worth stated that "many secular-minded Yemenis seem unsure whether to view the Houthis as oppressors or potential allies." In general, the Houthis' policies are often decided on a local basis, and high-ranking Houthi officials are frequently incapable of checking regional officers' powers, making the treatment of civilians dependent on the area. The Sarkha (Arabic: الصرخة, lit. 'The scream / The collective outcry') is the Houthis' political slogan, which reads "God is great, Death to America, Death to Israel, Curse on the Jews, Victory to Islam" on a vertical banner of Arabic text. It is often printed on a white background, with the Islamic statements coloured green and the statements about the group's enemies in a red font resembling barbed wire. The Houthis have been accused of expelling or restricting members of the rural Yemeni Jewish community, which had about 50 remaining members. Reports of abuse include Houthi supporters bullying or attacking the country's Jews. Houthi officials have denied any involvement in the harassment, asserting that under Houthi control, Jews in Yemen would be able to live and operate as freely as any other Yemeni citizen. "Our problems are with Zionism and the occupation of Palestine, but Jews here have nothing to fear," said Fadl Abu Taleb, a spokesman for the Houthis. Despite insistence by Houthi leaders that the movement is not sectarian, a Yemeni Jewish rabbi has reportedly said that many Jews remain terrified by the movement's slogan. As a result, Yemeni Jews reportedly retain a negative sentiment towards the Houthis, whom they allege have committed persecutions against them. According to Israeli Druze politician Ayoob Kara, Houthi militants had given Jews an ultimatum to "convert to Islam or leave Yemen". In March 2016, a UAE-based newspaper reported that one of the Yemeni Jews who emigrated to Israel in 2016 was fighting with the Houthis. In the same month, a Kuwaiti newspaper, al-Watan, reported that a Yemeni Jew named Haroun al-Bouhi was killed in Najran while fighting alongside the Houthis against Saudi Arabia. The Kuwaiti newspaper added that the Yemeni Jews had a good relationship with Ali Abdullah Saleh, who was at that time allied with the Houthis and fighting on various fronts alongside them. Al-Houthi has said through his fascicles: "Arab countries and all Islamic countries will not be safe from Jews except through their eradication and the elimination of their entity."
A New York Times journalist reported being asked why they were speaking to a "dirty Jew" and that the Jews in the village were unable to communicate with their neighbors. The Houthis have been accused of detaining, torturing, arresting, and holding incommunicado members of the Baháʼí Faith on charges of espionage and apostasy, which are punishable by death. Houthi leader Abdel-Malek al-Houthi has targeted Baháʼís in public speeches and accused the followers of the Baháʼí Faith of being "satanic" and agents of Western countries, citing a 2013 fatwa issued by Iran's supreme leader.
Membership and ranks
There is a difference between the al-Houthi family and the Houthi movement. Opponents and foreign media called the movement "Houthis", after the surname of its early leader, Hussein al-Houthi, who died in 2004. The group had between 1,000 and 3,000 fighters as of 2005 and between 2,000 and 10,000 fighters as of 2009. In 2010, the Yemen Post claimed that they had over 100,000 fighters. According to Houthi expert Ahmed Al-Bahri, by 2010 the Houthis had a total of 100,000–120,000 followers, including both armed fighters and unarmed loyalists. As of 2015, the group is reported to have attracted new supporters from outside its traditional demographics.
Activism and tactics
During their campaigns against both the Saleh and Hadi governments, the Houthis used civil disobedience. Following the Yemeni government's decision on 13 July 2014 to increase fuel prices, Houthi leaders succeeded in organising massive rallies in the capital Sanaa to protest the decision and to demand the resignation of the incumbent government of Abd Rabbuh Mansur Hadi over "state-corruption". These protests developed into the 2014–2015 phase of the insurgency. Similarly, following the 2015 Saudi-led airstrikes against the Houthis, which claimed civilian lives, Yemenis responded to Abdul-Malik al-Houthi's call and took to the streets of the capital, Sanaa, in the tens of thousands to voice their anger at the Saudi invasion. The movement's expressed goals include combating economic underdevelopment and political marginalization in Yemen while seeking greater autonomy for Houthi-majority regions of the country. One of its spokespeople, Mohammed al-Houthi, claimed in 2018 that he supports a democratic republic in Yemen. The Houthis have made fighting corruption the centerpiece of their political program. The Houthis have also held a number of mass gatherings since the revolution. On 24 January 2013, thousands gathered in Dahiyan, Sa'dah and Heziez, just outside Sanaa, to celebrate Mawlid al-Nabi, the birth of Mohammed. A similar event took place on 13 January 2014 at the main sports stadium in Sanaa. On this occasion, men and women were completely segregated: men filled the open-air stadium and the football field in the centre, guided by appointed Houthi safety officials wearing bright vests and matching hats; women poured into the adjacent indoor stadium, led inside by security women distinguishable only by their purple sashes and matching hats. The indoor stadium held at least five thousand women—ten times as many attendees as the 2013 gathering. The Houthis are said to have "a huge and well-oiled propaganda machine". They have established "a formidable media arm" with the technical support of the Lebanese Hezbollah. The format and content of the televised speeches of the group's leader, Abdul-Malik al-Houthi, are said to have been modeled after those of Hezbollah's Secretary General, Hassan Nasrallah.
Following the peaceful youth uprising in 2011, the group launched its official TV channel, Almasirah. The group operates up to 25 print and electronic newspapers, along with various online news services. One of the most versatile forms of Houthi mass media is the zawamil, a genre of primarily tribal oral poetry embedded in Yemen's social fabric. The zamil, rooted in cultural tradition, has been weaponised by the Houthis as a tool of propaganda and remains one of the most popular and rapidly growing platforms of Houthi propaganda, sung by popular vocalists such as Issa al-Laith and disseminated through various social media platforms including YouTube, Twitter and Telegram. The Spectator describes the zawamil as the most successful part of Houthi propaganda, stressing the movement's claimed virtues of piety, bravery and poverty in comparison with the corruption, wealth and hypocrisy of its adversaries: the Saudi-led coalition and Arab states allied to Israel. The Houthis use radio as an effective tool for spreading influence, often seizing stations and confiscating equipment from outlets that fail to comply with their broadcast restrictions. In 2019, a Yemeni radio station aligned with the group raised 73.5 million Yemeni rials ($132,000) in a fundraising campaign for the Lebanese militant group Hezbollah. In 2009, U.S. Embassy sources reported that the Houthis were using increasingly sophisticated tactics and strategies in their conflict with the government as they gained experience, and that they fought with religious fervor.
Armed strength
The Houthis exert partial control over the Yemeni Air Force and possess key air bases such as Al-Dailami and Al Hudaydah. The Houthi air force consists of a single Northrop F-5 fighter jet, which was seized from the Yemeni government. The group also flew a Soviet-era Mikoyan MiG-29 during a military parade over Sanaa in 2023. They also operate Mil Mi-17 helicopters seized during the civil war. Since its establishment in 2003, Hezbollah's Unit 3800 has provided the Houthis with training and strategic assistance. Iran's Quds Force has smuggled weapons to the Houthis since 2009, mainly using dhows and small fishing boats, and began smuggling missile components as well by 2015. Between 2004 and 2010, the Houthis frequently looted Yemeni government weapons caches, which included Scud and OTR-21 Tochka missiles obtained during the 1994 Yemeni civil war, owing to weak government control over its arsenal. In 2017, the UN reported that about 70% of the Yemeni military's weapons had likely been lost. In March 2015, the Saudi-led coalition claimed that it had destroyed most of the Houthis' missile capabilities during an air campaign; however, Houthi missile attacks persisted. The Houthis, with technological help from Iran, began work on new missile variants by mid-2016, and by September they introduced the Burkan-1, a variant of the Scud with extended range. The following month, they introduced the Burkan-2H, which had a separating warhead. These weapons enabled the Houthis to strike targets deep in Saudi Arabia, including the capital Riyadh. After examining debris from the Burkan-2H, the UN concluded in December 2017 that the missile had been supplied by Iran. In late 2015, the Houthis announced on Al-Masirah TV the local production of the Qaher-1 short-range ballistic missile. On 19 May 2017, Saudi Arabia intercepted a Houthi-fired ballistic missile that was targeting a deserted area south of the Saudi capital and most populous city, Riyadh.
The Houthi militias have captured dozens of tanks and large quantities of heavy weaponry from the Yemeni Armed Forces. In February 2017, the Houthis revealed their drone program. Between mid-2018 and 2019, long-range ballistic missile attacks decreased in frequency, while the Houthis increasingly used unmanned aerial vehicles (UAVs) and artillery. UAVs were used both for attacks on military and civilian targets and for reconnaissance. The Qasef UAV, which can carry up to 30 kilograms (66 lb) of explosives, had been used in over a dozen Houthi attacks since 2016. The Sammad-2, Sammad-3, and Qasif-2K suicide drones were unveiled in 2019. Houthi drone attacks peaked in 2021, with many targeting Saudi Arabia. In June 2019, the Saudi-led coalition stated that the Houthis had launched 226 ballistic missiles up to that point in the insurgency. The 2019 Abqaiq–Khurais attack targeted the Saudi Aramco oil processing facilities at Abqaiq and Khurais in eastern Saudi Arabia on 14 September 2019. The Houthi movement claimed responsibility, though the United States has asserted that Iran was behind the attack. Iranian President Hassan Rouhani said that "Yemeni people are exercising their legitimate right of defence ... the attacks were a reciprocal response to aggression against Yemen for years." The Houthis unveiled the Palestine-2 missile in June 2024; it closely resembled the Iranian Fattah-1 and Kheibar Shekan missiles. According to the Houthis, the missile was locally made, a claim rejected by defense analysts. In September, the missile was fired at Israel and landed in an open area near Ben Gurion Airport, traveling a distance of 2,000 kilometres (1,200 mi) in 11 minutes. The Houthis claimed that the missile used two-stage solid-fuel propulsion, had a range of 2,150 kilometres (1,340 mi), and had a maximum speed of Mach 16. Israel and the US denied that the missile was hypersonic. In the course of the Yemeni Civil War, the Houthis developed tactics to combat their opponents' navies. At first, their anti-ship operations were unsophisticated, limited to rocket-propelled grenades shot at vessels close to the shore. In the fight to secure the port city of Aden in 2015, the Yemeni Navy was largely destroyed, including all missile-carrying vessels. A number of smaller patrol craft, landing craft, and Mi-14 and Ka-28 ASW helicopters did survive. Their existence under Houthi control would be brief, as the majority were destroyed in air attacks during the Saudi-led intervention in Yemen in 2015. As a result, the Houthis were left with anti-ship missiles (AShMs) stored ashore, but no launchers, and a smattering of small patrol ships. These, along with a number of locally manufactured small craft and miscellaneous vessels, were to form the foundation of the Houthis' new naval warfare capabilities. Soon after the Houthis took over Yemen in 2015, Iran sought to strengthen the Houthis' naval capabilities by providing additional AShMs and constructing truck-based launchers that could easily be hidden after a launch, allowing the Houthis, and thus Iran, to intercept Coalition shipping off the Red Sea coast. Iran also anchored the MV Saviz intelligence vessel, disguised as a regular cargo vessel, off the coast of Eritrea; it provided intelligence and updates on Coalition ship movements to the Houthis. The Saviz served in this capacity until it was damaged in an Israeli limpet mine attack in April 2021, when it was replaced by the MV Behshad. The Behshad, like the Saviz, is based on a cargo ship.
Meanwhile, in Yemen, the Houthis, presumably with the assistance of Iranian engineers, converted a number of 10-meter-long patrol craft, donated by the UAE to the Yemeni Coast Guard in the early 2010s, into water-borne improvised explosive devices (WBIEDs). In 2017, one of these was used to attack the Saudi frigate Al Madinah. In the years since, three more WBIED designs have been built: the Tawfan-1, Tawfan-2, and Tawfan-3. Fifteen different types of naval mines were also produced. These are increasingly being deployed in the Red Sea, but have yet to be successful against naval vessels. Iran's delivery of the 120 km-range Noor and 200 km-range Qader AShMs, the 300 km-range Khalij Fars anti-ship ballistic missiles (ASBMs), and the Fajr-4CL and "Al-Bahr Al-Ahmar" anti-ship rockets, unveiled during a 2022 Houthi parade, was arguably the most significant escalation in support. These weapons combine long range, low cost, and high mobility with various types of guidance, making them well suited to the Houthi Navy. Though the Houthis' ASBM arsenal has yet to be tested, the Houthi Navy has had notable success with AShMs. On 1 October 2016, it hit the UAE Navy's HSV-2 Swift hybrid catamaran with a single C-801/C-802 AShM fired from a shore battery. Although the ship managed to stay afloat, the damage was so severe that it had to be decommissioned. The US Navy then sent two destroyers and an amphibious transport dock to the area to ensure that shipping could continue unabated. These vessels were attacked with AShMs on three separate occasions, without success. Though these attacks demonstrated only a limited Houthi ability to threaten vessels in Yemen's surrounding seas, the threat has since evolved significantly. With the Houthis now armed with a variety of anti-ship ballistic missiles and rockets that are notoriously difficult to intercept and can cover large areas, the next round of maritime clashes with the navies of the United Arab Emirates, Saudi Arabia, and the United States could have a completely different outcome. The Houthis have also hinted at using their extensive arsenal of loitering munitions against commercial shipping in the Red Sea, a tactic similar to recent Iranian tactics in the Persian Gulf. Patrol boats were fitted with anti-tank guided missiles, about 30 coast-watcher stations were set up, disguised "spy dhows" were constructed, and the maritime radars of docked ships were used to create targeting solutions for attacks. One of the most notable features of the Houthis' naval arsenal became its remote-controlled drone boats, which carry explosives and ram enemy warships. Among these, the self-guiding Shark-33 explosive drone boats originated as patrol boats of the old Yemeni coast guard. In addition, the Houthis have begun to train combat divers on the Zuqar and Bawardi islands.
Alleged foreign support
Former Yemeni president Ali Abdullah Saleh accused the Houthis of having ties to external backers, in particular the Iranian government. Saleh stated in an interview with The New York Times: "The real reason they received unofficial support from Iran was because they repeat same slogan that is raised by Iran – death to America, death to Israel. We have another source for such accusations. The Iranian media repeats statements of support for these [Houthi] elements. They are all trying to take revenge against the USA on Yemeni territories."
Such backing has been reported by diplomatic correspondents of major news outlets (e.g., Patrick Wintour of The Guardian), and has been the reported perspective of Yemeni governmental leaders militarily and politically opposed to the Houthis (e.g., as of 2017, the UN-recognized, deposed Yemeni President Abd Rabbu Mansour Hadi, who referred to the Houthi rebels as "Iranian militias"). The Houthis in turn accused the Saleh government of being backed by Saudi Arabia and of using Al-Qaeda to repress them. Under the next president, Hadi, Gulf Arab states accused Iran of backing the Houthis financially and militarily, though Iran denied this, and they were themselves backers of President Hadi. Despite confirming statements by Iranian and Yemeni officials regarding Iranian support in the form of trainers, weaponry, and money, the Houthis have denied receiving substantial financial or arms support from Iran. Joost Hiltermann of Foreign Policy wrote that whatever little material support the Houthis may have received from Iran, the intelligence and military support provided by the US and UK to the Saudi-led coalition exceeds it many times over. In April 2015, United States National Security Council spokesperson Bernadette Meehan remarked that "It remains our assessment that Iran does not exert command and control over the Houthis in Yemen". Hiltermann also wrote that Iran does not control the Houthis' decision-making, as evidenced by the Houthis' flat rejection of Iran's demand not to take over Sanaa in 2015. Thomas Juneau, writing in the journal International Affairs, states that even though Iran's support for the Houthis has increased since 2014, it remains far too limited to have a significant impact on the balance of power in Yemen. The Quincy Institute for Responsible Statecraft argues that Tehran's influence over the movement has been "greatly exaggerated" by "the Saudis, their coalition partners (mainly the United Arab Emirates), and their [lobbyists] in Washington." Similarly, academics such as Marieke Brandt and Charles Schmitz have stated that the allegation that the Houthis are merely an Iranian proxy force has its roots in political narratives by Saleh, Saudi Arabia, the United States and other anti-Houthi forces. While the Houthis have praised post-Islamic Revolution Iran for its opposition to American and Israeli imperialism in the Middle East, they have also criticized Iranian political and religious doctrine, including Iran's state religion of Twelver Shi'ism. A December 2009 cable between Sanaa and various intelligence agencies, part of the US diplomatic cables leak, states that US State Department analysts believed the Houthis obtained weapons from the Yemeni black market and from corrupt members of the Yemeni Republican Guard. On the 8 April 2015 edition of PBS NewsHour, Secretary of State John Kerry stated that the US knew Iran was providing military support to the Houthi rebels in Yemen, adding that Washington "is not going to stand by while the region is destabilised". Phillip Smyth of the Washington Institute for Near East Policy told Business Insider that Iran views Shia groups in the Middle East as "integral elements to the Islamic Revolutionary Guard Corps (IRGC)". Smyth claimed that there is a strong bond between Iran and the Houthi uprising working to overthrow the government in Yemen.
According to Smyth, Houthi leaders in many cases go to Iran for ideological and religious education, Iranian and Hezbollah leaders have been spotted on the ground advising Houthi troops, and these Iranian advisers are likely responsible for training the Houthis to use the type of sophisticated guided missiles fired at the US Navy. To some commentators (e.g., Alex Lockie of Business Insider), Iran's support for the revolt in Yemen is "a good way to bleed the Saudis", a recognized regional and ideological rival of Iran. Essentially, from that perspective, Iran is backing the Houthis to fight against a Saudi-led coalition of Gulf States whose aim is to maintain control of Yemen. The discord has led some commentators to fear that further confrontations may lead to an all-out Sunni–Shia war. In early 2013, photographs released by the Yemeni government showed the United States Navy and Yemen's security forces seizing a class of "either modern Chinese- or Iranian-made" shoulder-fired, heat-seeking anti-aircraft missiles "in their standard packaging", missiles "not publicly known to have been out of state control", raising concerns about Iran arming the rebels. In April 2016, the U.S. Navy intercepted a large Iranian arms shipment, seizing thousands of AK-47 rifles, rocket-propelled grenade launchers, and 0.50-caliber machine guns, a shipment the Pentagon described as likely headed to Yemen. Based on 2019 reporting from The Jerusalem Post, the Houthis have also repeatedly used a drone nearly identical to Iran Aircraft Manufacturing Industrial Company's Ababil-T drone in strikes against Saudi Arabia. In late October 2023, Israel stated that it had intercepted a "surface-to-surface long-range ballistic missile and two cruise missiles that were fired by the Houthi rebels in Yemen"; per reporting from Axios.com, this "was Israel's first-ever operational use of the Arrow system for intercepting ballistic missiles since the war began". The continuing interceptions and seizures at sea of weapons attributed to Iranian origin are tracked by the United States Institute of Peace. In 2013, an Iranian vessel en route to the Houthis was seized and discovered to be carrying Katyusha rockets, heat-seeking surface-to-air missiles, RPG-7s, Iranian-made night vision goggles, and artillery systems that can track land and naval targets 40 km away. In March 2017, Qasem Soleimani, the head of Iran's Quds Force, met with Iran's Islamic Revolutionary Guard Corps (IRGC) to look for ways of, as it was described, "empowering" the Houthis. Soleimani was quoted as saying, "At this meeting, they agreed to increase the amount of help, through training, arms and financial support." Although the Iranian government and the Houthis have both officially denied Iranian support for the group, Brigadier General Ahmad Asiri, the spokesman of the Saudi-led coalition, told Reuters that evidence of Iranian support was manifest in the Houthi use of Kornet anti-tank guided missiles, which had never been in service with the Yemeni military or the Houthis and had only arrived later in the conflict. In the same month, the IRGC altered the routes used to transport equipment to the Houthis, spreading shipments across smaller vessels in Kuwaiti territorial waters in order to avoid naval patrols in the Gulf of Oman prompted by sanctions; the shipments reportedly included missile parts, launchers, and drugs.
In May 2018, the United States imposed sanctions on Iran's IRGC, which was also listed by the US as a designated terrorist organization over its role in providing support for the Houthis, including help with manufacturing the ballistic missiles used in attacks targeting cities and oil fields in Saudi Arabia. In August 2018, despite previous Iranian denials of military support for the Houthis, IRGC commander Nasser Shabani was quoted on 7 August 2018 by the Iranian Fars News Agency as saying, "We (IRGC) told Yemenis [Houthi rebels] to strike two Saudi oil tankers, and they did it". In response to Shabani's account, the IRGC released a statement saying that the quote was a "Western lie" and that Shabani was a retired commander, despite there being no actual reports of his retirement after 37 years in the IRGC and despite media linked to the Iranian government confirming that he was still enlisted. Furthermore, while the Houthis and the Iranian government had previously denied any military affiliation, Iranian supreme leader Ali Khamenei announced his "spiritual" support for the movement in a personal meeting with Houthi spokesperson Mohammed Abdul Salam in Tehran, in the midst of ongoing conflicts in Aden in 2019. In 2024, commanders from the IRGC and Hezbollah were reported to be actively involved on the ground in Yemen, overseeing and directing Houthi attacks on Red Sea shipping, according to a report by Reuters. In July 2024, the United States imposed new sanctions targeting the IRGC's ties with the group; the Houthis dismissed the sanctions as pathetic and powerless. In 2024, Israel also placed sanctions on the Houthis. In August 2018, Reuters reported that a confidential United Nations investigation had found that the North Korean government had failed to discontinue its nuclear and missile delivery programs and, in conjunction, was "cooperating militarily with Syria" and "trying to sell weapons to Yemen's Houthis". In August 2019, the South Korean National Intelligence Service tracked the Scud missiles used to attack Saudi Arabia back to North Korea. In January 2024, South Korea's Yonhap News Agency reported that North Korea had evidently shipped weapons to the Houthis via Iran, based on Hangul-script writing found on missiles launched towards Israel. North Korea considers the Houthis a "resistance force". In March 2025, after airstrikes by the US Air Force commenced, the North Korean Ambassador to Egypt, Ma Dong-hee, who is also accredited to Yemen, condemned the attacks on the Houthis as a threat to regional and global order. Newsweek noted in July 2024 that the Houthis were in possession of Russian-made P-800 Oniks missiles, and that the transfer had likely occurred via Syria and Iran. In July 2024, The Wall Street Journal reported that US officials saw increasing indications that Russia was considering arming the Houthis with advanced anti-ship missiles, via Iranian smuggling routes, in response to US support for Ukraine during Russia's invasion; however, Russia did not follow through, owing to pushback from the US and Saudi Arabia. In August 2024, Middle East Eye, citing a US official, reported that personnel of Russia's GRU were stationed in Houthi-controlled parts of Yemen to assist the militia's attacks on merchant ships. In October, The Wall Street Journal reported that Russia was supplying the Houthis with geospatial intelligence to target Western ships.
Two China-based companies were sanctioned by the United States in 2024 for providing "dual-use materials and components needed to manufacture, maintain, and deploy an arsenal of advanced missiles and unmanned aerial vehicles (UAVs) against U.S. and allied interests." A report by the Foundation for Defense of Democracies stated that the Houthis were using Chinese-made weapons in their attacks on shipping in the Red Sea in exchange for granting Chinese ships safe passage through the sea. Another report, from Israel's i24 News, stated that China provided the Houthis with "advanced components and guidance equipment" for their missiles. The Institute for the Study of War reported that the Houthis supplement their weaponry with additional arms and dual-use components sourced from Russia or China. For example, Yemeni border customs seized 800 Chinese-made drone propellers in a shipment bound for the Houthis, and in August 2024 the Houthis purchased hydrogen fuel cylinders from Chinese suppliers, aiming to increase the range and payload of their drones. According to The Wall Street Journal, the Houthis sent a group from Saada to Beijing to study Mandarin and manage the supply of drones and missile guidance systems from China and Hong Kong to Yemen. According to the United States Department of State, the Chinese state-owned Chang Guang Satellite Technology Corporation has provided geospatial intelligence to the Houthis to target U.S. warships in the Red Sea. A 2024 UN report stated that al-Shabaab and the Houthis had a relationship that was "transactional or opportunistic, and not ideological", while a 2025 report stated that their relationship was deepening and posed a threat to regional security. According to the Africa Center for Strategic Studies, al-Shabaab receives weapons, training, and assistance in expanding the criminal enterprises that fund its operations, while the Houthis benefit by expanding their influence, strengthening anti-American forces in the region, and weakening pro-American ones; both sides also assist each other in smuggling operations. Al-Qaeda in the Arabian Peninsula (AQAP) and the Houthis had previously fought each other, but have observed a ceasefire since 2022, under which they have cooperated in attacks against the Yemeni government, provided safe havens for each other in their respective territories, and cooperated on security and intelligence.
Human rights violations
According to the Panel of Experts on Yemen, established pursuant to Security Council resolution 2140, the Houthis have carried out a wide range of human rights violations, including violations of international humanitarian law and abuse of women and children. Children as young as 13 have been arrested for "indecent acts" over alleged homosexual orientation, or in "political cases" when their families do not comply with Houthi ideology or regulations. Minors share cells with adult prisoners, and according to unspecified reports that the Panel has deemed "credible", boys held in the Al-Shahid Al-Ahmar police station in Sana'a are systematically raped. Aside from the Panel of Experts, the London-based Arabic newspaper Asharq Al-Awsat alleges that the Houthis have revived slavery in Yemen. The Houthis have been accused of violations of international humanitarian law such as using child soldiers, shelling civilian areas, forced evacuations, and executions. According to Human Rights Watch, the Houthis intensified their recruitment of children in 2015.
UNICEF has stated that children fighting with the Houthis and other armed groups in Yemen comprise up to a third of all fighters in the country. Human Rights Watch has further accused Houthi forces of using landmines in Yemen's third-largest city, Taizz, which have caused many civilian casualties and prevented the return of families displaced by the fighting. HRW has also accused the Houthis of interfering with the work of Yemen's human rights advocates and organizations. In 2009, HRW researcher Christoph Wilcke said that although the Republic of Yemen government accused the Houthis of using civilians as human shields, HRW did not have enough evidence to conclude that the Houthis were intentionally doing so. Nonetheless, Wilcke stated there may have been cases HRW was not able to document. Akram Al Walidi, one of four journalists detained by the Houthis on spying charges and released in April 2023 as part of a prisoner exchange deal between the Houthis and the internationally recognized government of Yemen, said he felt the four were used as human shields after the Houthis moved them in October 2020 to one of their military camps in Sanaa, an expected target of Saudi airstrikes. According to Human Rights Watch, the Houthis also use hostage-taking as a tactic to generate profit. Human Rights Watch documented 16 cases in which Houthi authorities held people unlawfully, in large part to extort money from relatives or to exchange them for people held by opposing forces. The United Nations World Food Programme has accused the Houthis of diverting food aid and illegally removing food lorries from distribution areas, with rations sold on the open market or given to those not entitled to them. The WFP has also warned that aid could be suspended to areas of Yemen under Houthi control because "obstructive and uncooperative" Houthi leaders have hampered the independent selection of beneficiaries. WFP spokesman Herve Verhoosel stated, "The continued blocking by some within the Houthi leadership of the biometric registration ... is undermining an essential process that would allow us to independently verify that food is reaching ... people on the brink of famine". The WFP has warned that "unless progress is made on previous agreements we will have to implement a phased suspension of aid". The Norwegian Refugee Council has stated that it shares the WFP's frustrations and has reiterated its call for the Houthis to allow humanitarian agencies to distribute food. The United Nations Human Rights Council published a report covering the period July 2019 to June 2020 which contained evidence of the Houthis' recruitment of boys as young as seven, and of 34 girls aged between 13 and 17, to act as spies, recruiters of other children, guards, medics, and members of a female fighting force. Twelve girls suffered sexual violence, arranged marriages, and child marriages as a result of their recruitment. In Houthi-controlled areas, women have been blocked from travelling without a mahram (male guardian), even for essential healthcare. This has also affected United Nations humanitarian operations in Yemen, forcing female staff into office jobs. The Houthis use allegations of prostitution as a tool of public defamation against Yemeni women, including those in the diaspora engaged in politics, civil society, or human rights activism, alongside threats to individuals and their families.
Women in detention are sexually assaulted, have been subjected to virginity tests, and are often blocked from access to essential goods. Torture of female detainees is also carried out by the Zaynabiyat, the female police wing of the Houthis. The UN Panel of Experts on Yemen discovered instances of Houthi rape of female detainees to "purify" them, as a punishment, or to coerce confessions. The Panel documented cases in which the Houthis forced detained women to become sex slaves who also collect information for the Houthis. Documented instances include a 2021 case in which a female detainee was forced to have sexual intercourse with multiple men at Houthi detention centers, in preparation for being forced to serve as a sex slave for important clients while also obtaining information. The Panel also received information about another detainee who was forced to become a prostitute and gather information for the Houthis in return for her release; a similar instance had been documented in 2019. These practices have also resulted in women who had been detained by the Houthis being ostracized by society, and in one instance a woman was killed by her family for bringing shame upon it. Anadolu Agency reported that Yemen-based rights groups documented 1,181 violations against women committed by the Houthis from 2017 to 2020. Yemeni activist Samira Abdullah al-Houry was held in a Houthi jail for three months and, after her release, gave numerous interviews alleging torture and rape by Houthi guards. Her testimony contributed to UN Security Council sanctions being imposed on two Houthi security officials in February 2021. It was later alleged that she admitted some of her testimony was untrue and that she had embellished claims at the request of Saudi officials. According to Amnesty International, as of 9 February 2024 two Houthi-run courts in Yemen had sentenced 48 individuals to death, flogging, or prison over charges related to same-sex conduct in the preceding month. According to Human Rights Watch, Houthi militias have "beaten, raped, and tortured detained migrants and asylum seekers from the Horn of Africa." UN experts have warned that female migrants face sexual violence, forced labor, and forced drug trafficking by smugglers who collaborate with the Houthi-controlled Yemen Immigration, Passport and Nationality Authority (IPNA).
Governance
According to a 2009 leaked US Embassy cable, the Houthis have reportedly established courts and prisons in areas they control. They impose their own laws on local residents, demand protection money, and dispense rough justice by ordering executions. AP reporter Ahmad al-Haj argued that the Houthis were winning hearts and minds by providing security in areas long neglected by the Yemeni government while limiting the arbitrary and abusive power of influential sheikhs. According to the Civic Democratic Foundation, the Houthis help resolve conflicts between tribes and reduce the number of revenge killings in areas they control. The US ambassador believed that the reports describing the Houthis' role in arbitrating local disputes were likely accurate.
Public opinion
A survey conducted in 2024 by the Sanaa Center for Strategic Studies found that only 8% of Yemenis in Houthi-controlled areas had a positive view of the Houthi movement, compared to 3% in both government-controlled and contested areas. Conversely, 20%, 34%, and 39% of respondents in these areas, respectively, expressed negative views.
Sanctions
In January 2024, the US and UK imposed sanctions on key Houthi figures, including the defense minister, in response to the Houthi attacks on international shipping in the Red Sea that escalated in November 2023. The new sanctions were imposed in addition to the existing sanctions against 11 Houthi individuals and 2 entities, which remained in force. On 28 April 2025, the U.S. Treasury Department sanctioned three shipping companies for their role in delivering oil products to the Houthis. The deliveries took place via the Houthi-controlled port of Ras Isa.
United States attacks on Yemen
The U.S. launched a military campaign against Yemen in mid-March 2025, which it said was directed at Houthi military and strategic targets. The Houthis said women and children were killed in the attacks. The Houthis, backed by Iran, state that their operations, which have affected global trade, are in solidarity with Palestinians in Gaza. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Manifest_destiny] | [TOKENS: 11988] |
Contents Manifest destiny
Manifest destiny was the expansionist belief in the 19th-century United States that American settlers were destined to expand westward across North America, and that this belief was both obvious ("manifest") and certain ("destiny"). The belief is rooted in American exceptionalism, romantic nationalism, and white nationalism, implying the inevitable spread of republicanism and the American way. It is one of the earliest expressions of American imperialism. According to historian William Earl Weeks, there were three basic tenets behind the concept: the virtue of the American people and their institutions; the mission to spread these institutions, thereby redeeming and remaking the world in the image of the United States; and the destiny under God to do this work. Manifest destiny remained heavily divisive in politics, causing constant conflict with regard to slavery in the new states and territories. It is also associated with the expansion of European settlers onto the territories of Indigenous Americans and the annexation of lands on the continent west of the United States' borders at the time. The concept became one of several major campaign issues in the 1844 presidential election, which the Democratic Party won; the phrase "Manifest Destiny" was coined within a year. The concept of manifest destiny was used by Democrats to justify the 1846 Oregon boundary dispute and the 1845 annexation of Texas as a slave state, culminating in the 1846 Mexican–American War. In contrast, the large majority of Whigs and prominent Republicans (such as Abraham Lincoln and Ulysses S. Grant) rejected the concept and campaigned against these actions. By 1843, former U.S. president John Quincy Adams, originally a major supporter of the concept underlying manifest destiny, had changed his mind and repudiated expansionism because it meant the expansion of slavery in Texas. Ulysses S. Grant served in and condemned the Mexican–American War, declaring it "one of the most unjust ever waged by a stronger against a weaker nation". After the American Civil War, the U.S. acquired Alaska in 1867. In the 1890s, Republican president William McKinley annexed Hawaii, the Philippines, Puerto Rico, Guam, and American Samoa. The 1898 Spanish–American War was controversial, and imperialism became a major issue in the 1900 United States presidential election. Historian Daniel Walker Howe summarizes that "American imperialism did not represent an American consensus; it provoked bitter dissent within the national polity".
Context
There was never a set of principles defining manifest destiny; it was always a general idea rather than a specific policy made with a motto. Ill-defined but keenly felt, manifest destiny was an expression of conviction in the morality and value of expansionism that complemented other popular ideas of the era, including American exceptionalism and Romantic nationalism. Andrew Jackson, who spoke of "extending the area of freedom", typified the conflation of America's potential greatness, the nation's budding sense of Romantic self-identity, and its expansion. Yet Jackson was not the only president to elaborate on the principles underlying manifest destiny. Owing in part to the lack of a definitive narrative outlining its rationale, proponents offered divergent or seemingly conflicting viewpoints. While many writers focused primarily upon American expansionism, be it into Mexico or across the Pacific, others saw the term as a call to example. Without an agreed-upon interpretation, much less an elaborated political philosophy, these conflicting views of America's destiny were never resolved.
This variety of possible meanings was summed up by Ernest Lee Tuveson: "A vast complex of ideas, policies, and actions is comprehended under the phrase 'Manifest Destiny'. They are not, as we should expect, all compatible, nor do they come from any one source."
Etymology
Most historians credit the conservative newspaper editor and future propagandist for the Confederacy, John O'Sullivan, with coining the term manifest destiny in 1845. However, other historians suggest the unsigned editorial titled "Annexation" in which it first appeared was written by journalist and annexation advocate Jane Cazneau. O'Sullivan was an influential advocate for Jacksonian democracy, described by Julian Hawthorne as "always full of grand and world-embracing schemes". O'Sullivan wrote an article in 1839 that, while not using the term "manifest destiny", did predict a "divine destiny" for the United States based upon values such as equality, rights of conscience, and personal enfranchisement "to establish on earth the moral dignity and salvation of man". This destiny was not explicitly territorial, but O'Sullivan predicted that the United States would be one of a "Union of many Republics" sharing those values. Six years later, in 1845, O'Sullivan wrote another essay titled "Annexation" in the Democratic Review, in which he first used the phrase manifest destiny. In this article, he urged the U.S. to annex the Republic of Texas, not only because Texas desired this, but because it was "our manifest destiny to overspread the continent allotted by Providence for the free development of our yearly multiplying millions". Overcoming Whig opposition, Democrats annexed Texas in 1845. O'Sullivan's first usage of the phrase "manifest destiny" attracted little attention. O'Sullivan's second use of the phrase became extremely influential. On December 27, 1845, in his newspaper the New York Morning News, O'Sullivan addressed the ongoing boundary dispute with Britain. O'Sullivan argued that the United States had the right to claim "the whole of Oregon": And that claim is by the right of our manifest destiny to overspread and to possess the whole of the continent which Providence has given us for the development of the great experiment of liberty and federated self-government entrusted to us. That is, O'Sullivan believed that Providence had given the United States a mission to spread republican democracy ("the great experiment of liberty"). Because the British government would not spread democracy, thought O'Sullivan, British claims to the territory should be overruled. O'Sullivan believed that manifest destiny was a moral ideal (a "higher law") that superseded other considerations. O'Sullivan's original conception of manifest destiny was not a call for territorial expansion by force. He believed that the expansion of the United States would happen without the direction of the U.S. government or the involvement of the military. After Americans immigrated to new regions, they would set up new democratic governments, and then seek admission to the United States, as Texas had done. In 1845, O'Sullivan predicted that California would follow this pattern next, and that even Canada would eventually request annexation as well. He was critical of the Mexican–American War in 1846, although he came to believe that the outcome would be beneficial to both countries. Ironically, O'Sullivan's term became popular only after it was criticized by Whig opponents of the Polk administration.
Whigs denounced manifest destiny, arguing "that the designers and supporters of schemes of conquest, to be carried on by this government, are engaged in treason to our Constitution and Declaration of Rights, giving aid and comfort to the enemies of republicanism, in that they are advocating and preaching the doctrine of the right of conquest". On January 3, 1846, Representative Robert Winthrop used the term for the first time in Congress, stating in a speech: There is one element in our title [to Oregon], however, which I confess that I have not named, and to which I may not have done entire justice. I mean that new revelation of right which has been designated as the right of our manifest destiny to spread over this whole continent. It has been openly avowed in a leading Administration journal that this, after all, is our best and strongest title—one so clear, so pre-eminent, and so indisputable, that if Great Britain had all our other titles in addition to her own, they would weigh nothing against it. The right of our manifest destiny! There is a right for a new chapter in the law of nations; or rather, in the special laws of our own country; for I suppose the right of a manifest destiny to spread will not be admitted to exist in any nation except the universal Yankee nation! Winthrop was the first in a long line of critics who suggested that advocates of manifest destiny were citing "Divine Providence" as justification for actions motivated by chauvinism and self-interest. Despite this criticism, expansionists embraced the phrase, which caught on so quickly that its origin was soon forgotten.
Themes and influences
Historian Frederick Merk wrote in 1963 that the concept of manifest destiny was born out of "a sense of mission to redeem the Old World by high example ... generated by the potentialities of a new earth for building a new heaven". Merk also states that manifest destiny was a heavily contested concept within the nation: From the outset Manifest Destiny—vast in program, in its sense of continentalism—was slight in support. It lacked national, sectional, or party following commensurate with its magnitude. The reason was it did not reflect the national spirit. The thesis that it embodied nationalism, found in much historical writing, is backed by little real supporting evidence. A possible influence is racial predominance, namely the idea that the American Anglo-Saxon race was "separate, innately superior" and "destined to bring good government, commercial prosperity and Christianity to the American continents and the world". Author Reginald Horsman wrote in 1981 that this view also held that "inferior races were doomed to subordinate status or extinction" and that this was used to justify "the enslavement of the blacks and the expulsion and possible extermination of the Indians". The origin of the first theme, later known as American exceptionalism, was often traced to America's Puritan heritage, particularly John Winthrop's famous "City upon a Hill" sermon of 1630, in which he called for the establishment of a virtuous community that would be a shining example to the Old World. In his influential 1776 pamphlet Common Sense, Thomas Paine echoed this notion, arguing that the American Revolution provided an opportunity to create a new, better society: We have it in our power to begin the world over again. A situation, similar to the present, hath not happened since the days of Noah until now. The birthday of a new world is at hand...
Many Americans agreed with Paine, and came to believe that the United States' virtue was a result of its special experiment in freedom and democracy. Thomas Jefferson, in a letter to James Monroe, wrote, "it is impossible not to look forward to distant times when our rapid multiplication will expand itself beyond those limits, and cover the whole northern, if not the southern continent." To Americans in the decades that followed, their proclaimed freedom for mankind, embodied in the Declaration of Independence, could only be described as the inauguration of "a new time scale", because the world would look back and define history as events that took place before, and after, the Declaration of Independence. It followed that Americans owed to the world an obligation to expand and preserve these beliefs. The second theme's origination is less precise. A popular expression of America's mission was elaborated by President Abraham Lincoln's description in his December 1, 1862, message to Congress. He described the United States as "the last, best hope of Earth". The "mission" of the United States was further elaborated during Lincoln's Gettysburg Address, in which he interpreted the American Civil War as a struggle to determine if any nation with democratic ideals could survive; this has been called by historian Robert Johannsen "the most enduring statement of America's Manifest Destiny and mission". The third theme can be viewed as a natural outgrowth of the belief that God had a direct influence in the foundation and further actions of the United States. Political scientist and historian Clinton Rossiter described this view as summing "that God, at the proper stage in the march of history, called forth certain hardy souls from the old and privilege-ridden nations ... and that in bestowing his grace He also bestowed a peculiar responsibility". Americans presupposed that they were not only divinely elected to maintain the North American continent, but also to "spread abroad the fundamental principles stated in the Bill of Rights". In many cases, this meant that neighboring colonial holdings and countries were seen as obstacles rather than as the destiny God had provided the United States. Historian John Mack Faragher's 1997 analysis of the political polarization between the Democratic Party and the Whig Party is that: Most Democrats were wholehearted supporters of expansion, whereas many Whigs (especially in the North) were opposed. Whigs welcomed most of the changes wrought by industrialization but advocated strong government policies that would guide growth and development within the country's existing boundaries; they feared (correctly) that expansion raised a contentious issue, the extension of slavery to the territories. On the other hand, many Democrats feared industrialization the Whigs welcomed... For many Democrats, the answer to the nation's social ills was to continue to follow Thomas Jefferson's vision of establishing agriculture in the new territories to counterbalance industrialization. Two Native American writers have recently tried to link some of the themes of manifest destiny to the original ideology of the 15th-century decree known as the Doctrine of Christian Discovery. Nick Estes (a Lakota) links manifest destiny to the 15th-century Catholic doctrine distinguishing Christians from non-Christians in the expansion of European nations. Estes and international jurist Tonya Gonnella Frichner (of the Onondaga Nation) further link the doctrine of discovery to Johnson v.
McIntosh, and frame their arguments on the correlation between manifest destiny and the Doctrine of Christian Discovery using the statement made by Chief Justice John Marshall during the case, as he "spelled out the rights of the United states to Indigenous lands" and drew upon the Doctrine of Christian Discovery for his statement. Marshall ruled that "indigenous peoples possess 'occupancy' rights, meaning their lands could be taken by the powers of 'discovery'". Frichner explains that "The newly formed United States needed to manufacture an American Indian political identity and concept of Indian land that would open the way for United States and westward colonial expansion." In this way, manifest destiny was inspired by the original European colonization of the Americas, and it excuses U.S. violence against Indigenous Nations. According to historian Dorceta Taylor: "Minorities are not usually chronicled as explorers or environmental activists, yet the historical records show that they were a part of expeditions, resided and worked on the frontier, founded towns, and were educators and entrepreneurs. In short, people of color were very important actors in westward expansion." The desire for trade with China and other Asian countries was another ground for expansionism: Americans saw prospects of westward contact with Asia as fulfilling long-held Western hopes of finding new routes to Asia, and perceived the Pacific as less unruly and less dominated by Old World conflicts than the Atlantic, and therefore a more inviting area in which the new nation could expand its influence.
Debate over Manifest destiny
With the Louisiana Purchase in 1803, which doubled the size of the United States, Thomas Jefferson set the stage for the continental expansion of the United States. Many began to see this as the beginning of a new providential mission: if the United States was successful as a "shining city upon a hill", people in other countries would seek to establish their own democratic republics. Not all Americans or their political leaders believed that the United States was a divinely favored nation, or thought that it ought to expand. For example, many Whigs opposed the territorial expansion that Democrats justified by the claim that the United States was destined to serve as a virtuous example to the rest of the world and had a divine obligation to spread its superordinate political system and way of life throughout the North American continent. Many in the Whig party "were fearful of spreading out too widely", and they "adhered to the concentration of national authority in a limited area". In July 1848, Alexander Stephens denounced President Polk's expansionist interpretation of America's future as "mendacious". In the mid-19th century, expansionism, especially southward toward Cuba, also faced opposition from those Americans who were trying to abolish slavery. As more territory was added to the United States in the following decades, "extending the area of freedom" in the minds of Southerners also meant extending the institution of slavery. That is why slavery became one of the central issues in the continental expansion of the United States before the Civil War. Before and during the Civil War, both sides claimed that America's destiny was rightfully their own. Abraham Lincoln opposed anti-immigrant nativism and the imperialism of manifest destiny as both unjust and unreasonable.
He objected to the Mexican War and believed each of these disordered forms of patriotism threatened the inseparable moral and fraternal bonds of liberty and union that he sought to perpetuate through a patriotic love of country guided by wisdom and critical self-awareness. Lincoln's "Eulogy to Henry Clay" of June 6, 1852, provides the most cogent expression of his reflective patriotism. Ulysses S. Grant served in the war with Mexico and later wrote: I was bitterly opposed to the measure [to annex Texas], and to this day regard the war [with Mexico] which resulted as one of the most unjust ever waged by a stronger against a weaker nation. It was an instance of a republic following the bad example of European monarchies, in not considering justice in their desire to acquire additional territory... The Southern rebellion was largely the outgrowth of the Mexican war. Nations, like individuals, are punished for their transgressions. We got our punishment in the most sanguinary and expensive war of modern times.
Era of expansion
The phrase "manifest destiny" is most associated with the territorial expansion of the United States from 1803 to 1900. However, the Vermont Republic joined the United States in 1791, the territory of American Samoa grew larger in 1904 and 1925, and the U.S. acquired what is now the United States Virgin Islands in 1917 and what was the United Nations Trust Territory of the Pacific Islands in 1947. Of that Trust Territory, the Northern Mariana Islands joined the United States in 1986, while the Federated States of Micronesia, the Republic of the Marshall Islands, and Palau became independent states in a Compact of Free Association with the U.S. Some scholars limit the "manifest destiny" period solely to North American continental expansion from the Louisiana Purchase to the acquisition of Alaska in 1867, sometimes called the "age of manifest destiny". During this time, the United States expanded to the Pacific Ocean—"from sea to shining sea"—largely defining the borders of the continental United States as they are today. In the 1890s, the United States expanded into Polynesia and Asia with the annexation of the Republic of Hawaii, the Philippines, Guam, and American Samoa. One of the goals of the War of 1812 was to threaten to annex the British colony of Lower Canada as a bargaining chip to force the British to abandon their support for the various Native American tribes residing there. The result of this overoptimism was a series of defeats in 1812, in part due to the wide use of poorly trained state militias rather than regular troops. The American victories at the Battle of Lake Erie and the Battle of the Thames in 1813 ended the Indian raids and removed the main reason for threatening annexation. To end the War of 1812, John Quincy Adams, Henry Clay, Albert Gallatin (a former treasury secretary and a leading expert on Indians), and other American diplomats negotiated the Treaty of Ghent with Britain in 1814. They rejected the British plan to set up an Indian state in U.S. territory south of the Great Lakes. They explained the American policy toward the acquisition of Indian lands: The United States, while intending never to acquire lands from the Indians otherwise than peaceably, and with their free consent, are fully determined, in that manner, progressively, and in proportion as their growing population may require, to reclaim from the state of nature, and to bring into cultivation every portion of the territory contained within their acknowledged boundaries.
In thus providing for the support of millions of civilized beings, they will not violate any dictate of justice or of humanity; for they will not only give to the few thousand savages scattered over that territory an ample equivalent for any right they may surrender, but will always leave them the possession of lands more than they can cultivate, and more than adequate to their subsistence, comfort, and enjoyment, by cultivation. If this be a spirit of aggrandizement, the undersigned are prepared to admit, in that sense, its existence; but they must deny that it affords the slightest proof of an intention not to respect the boundaries between them and European nations, or of a desire to encroach upon the territories of Great Britain... They will not suppose that that Government will avow, as the basis of their policy towards the United States a system of arresting their natural growth within their own territories, for the sake of preserving a perpetual desert for savages. A shocked Henry Goulburn, one of the British negotiators at Ghent, remarked, after coming to understand the American position on taking the Indians' land: Till I came here, I had no idea of the fixed determination which there is in the heart of every American to extirpate the Indians and appropriate their territory. The 19th-century belief that the United States would eventually encompass all of North America is known as "continentalism". An early proponent of this idea, John Quincy Adams became a leading figure in U.S. expansion between the Louisiana Purchase in 1803 and the Polk administration in the 1840s. In 1811, Adams wrote to his father: The whole continent of North America appears to be destined by Divine Providence to be peopled by one nation, speaking one language, professing one general system of religious and political principles, and accustomed to one general tenor of social usages and customs. For the common happiness of them all, for their peace and prosperity, I believe it is indispensable that they should be associated in one federal Union. Adams did much to further this idea. He orchestrated the Treaty of 1818, which established the border between British North America and the United States as far west as the Rocky Mountains, and provided for the joint occupation of the region known in American history as the Oregon Country and in British and Canadian history as the New Caledonia and Columbia Districts. He negotiated the Transcontinental Treaty in 1819, transferring Florida from Spain to the United States and extending the U.S. border with Spanish Mexico all the way to the Pacific Ocean. And he formulated the Monroe Doctrine of 1823, which warned Europe that the Western Hemisphere was no longer open for European colonization. The Monroe Doctrine and "manifest destiny" formed a closely related nexus of principles: historian Walter McDougall calls manifest destiny a corollary of the Monroe Doctrine, because while the Monroe Doctrine did not specify expansion, expansion was necessary in order to enforce the doctrine. Concerns in the United States that European powers were seeking to acquire colonies or greater influence in North America led to calls for expansion in order to prevent this. In his influential 1935 study of manifest destiny, done in conjunction with the Walter Hines Page School of International Relations, Albert Weinberg wrote: "the expansionism of the [1830s] arose as a defensive effort to forestall the encroachment of Europe in North America". 
Manifest destiny played an important role in the development of the transcontinental railroad. The transcontinental railroad system is often used in manifest destiny imagery like John Gast's painting, American Progress, where multiple locomotives are seen traveling west. According to academic Dina Gilio-Whitaker, "the transcontinental railroads not only enabled [U.S. control over the continent] but also accelerated it exponentially." Historian Boyd Cothran says that "modern transportation development and abundant resource exploitation gave rise to an appropriation of indigenous land, [and] resources." Manifest destiny played its most important role in the Oregon boundary dispute between the United States and Britain, when the phrase "manifest destiny" originated. The Anglo-American Convention of 1818 had provided for the joint occupation of the Oregon Country, and thousands of Americans migrated there in the 1840s over the Oregon Trail. The British rejected a proposal by U.S. President John Tyler (in office 1841–1845) to divide the region along the 49th parallel, and instead proposed a boundary line farther south, along the Columbia River, which would have made most of what later became the state of Washington part of their colonies in North America. Advocates of manifest destiny protested and called for the annexation of the entire Oregon Country up to the Alaska line (54°40ʹ N). Presidential candidate Polk used this popular outcry to his advantage, and the Democrats called for the annexation of "All Oregon" in the 1844 U.S. presidential election. As president, Polk sought compromise and renewed the earlier offer to divide the territory in half along the 49th parallel, to the dismay of the most ardent advocates of manifest destiny. When the British refused the offer, American expansionists responded with slogans such as "The whole of Oregon or none" and "Fifty-four forty or fight", referring to the northern border of the region. (The latter slogan is often mistakenly described as having been a part of the 1844 presidential campaign.) When Polk moved to terminate the joint occupation agreement, the British finally agreed in early 1846 to divide the region along the 49th parallel, leaving the lower Columbia basin as part of the United States. In order to ensure that Britain retained all of Vancouver Island and the southern Gulf Islands, however, it was agreed that the border would swing south around that area. The Oregon Treaty of 1846 formally settled the dispute; Polk's administration succeeded in selling the treaty to Congress because the United States was about to begin the Mexican–American War, and the president and others argued it would be foolish to also fight the British Empire. Despite the earlier clamor for "All Oregon", the Oregon Treaty was popular in the United States and was easily ratified by the Senate. The most fervent advocates of manifest destiny had not prevailed along the northern border because, according to Reginald Stuart, "the compass of manifest destiny pointed west and southwest, not north, despite the use of the term 'continentalism'". In 1869, American historian Frances Fuller Victor published Manifest Destiny in the West in the Overland Monthly, arguing that the efforts of early American fur traders and missionaries presaged American control of Oregon. She concluded the article as follows: It was an oversight on the part of the United States, the giving up the island of Quadra and Vancouver, on the settlement of the boundary question. 
Yet, "what is to be, will be", as some realist has it; and we look for the restoration of that picturesque and rocky atom of our former territory as inevitable. Manifest destiny played an important role in the expansion of Texas and the American relationship with Mexico. In 1836, the Republic of Texas declared independence from Mexico and, after the Texas Revolution, sought to join the United States as a new state. This was an idealized process of expansion that had been advocated from Jefferson to O'Sullivan: newly democratic and independent states would request entry into the United States, rather than the United States extending its government over people who did not want it. The annexation of Texas was attacked by anti-slavery spokesmen because it would add another slave state to the Union. Presidents Andrew Jackson and Martin Van Buren declined Texas's offer to join the United States in part because the slavery issue threatened to divide the Democratic Party. Before the election of 1844, Whig candidate Henry Clay and the presumed Democratic candidate, former president Van Buren, both declared themselves opposed to the annexation of Texas, each hoping to keep the troublesome topic from becoming a campaign issue. This unexpectedly led to Van Buren being dropped by the Democrats in favor of Polk, who favored annexation. Polk tied the Texas annexation question with the Oregon dispute, thus providing a sort of regional compromise on expansion. (Expansionists in the North were more inclined to promote the occupation of Oregon, while Southern expansionists focused primarily on the annexation of Texas.) Although elected by a very slim margin, Polk proceeded as if his victory had been a mandate for expansion. After the election of Polk, but before he took office, Congress approved the annexation of Texas. Polk moved to occupy a portion of Texas that had declared independence from Mexico in 1836, but was still claimed by Mexico. This paved the way for the outbreak of the Mexican–American War on April 24, 1846. With American successes on the battlefield, by the summer of 1847, there were calls for the annexation of "All Mexico", particularly among Eastern Democrats, who argued that bringing Mexico into the Union was the best way to ensure future peace in the region. This was a controversial proposition for two reasons. First, idealistic advocates of manifest destiny like O'Sullivan had always maintained that the laws of the United States should not be imposed on people against their will. The annexation of "All Mexico" would be a violation of this principle. Second, the annexation of Mexico was controversial because it would mean extending U.S. citizenship to millions of Mexicans, who belonged to a racially mixed population and adhered primarily to Roman Catholicism. Senator John C. Calhoun of South Carolina, who had approved of the annexation of Texas, was opposed to the annexation of Mexico, as well as the "mission" aspect of manifest destiny, for racial reasons. He made these views clear in a speech to Congress on January 4, 1848: We have never dreamt of incorporating into our Union any but the Caucasian race—the free white race. To incorporate Mexico, would be the very first instance of the kind, of incorporating an Indian race; for more than half of the Mexicans are Indians, and the other is composed chiefly of mixed tribes. I protest against such a union as that! Ours, sir, is the Government of a white race.... 
We are anxious to force free government on all; and I see that it has been urged ... that it is the mission of this country to spread civil and religious liberty over all the world, and especially over this continent. It is a great mistake. This debate brought to the forefront one of the contradictions of manifest destiny: on the one hand, identitarian ideas inherent in manifest destiny suggested that Mexicans, as non-whites, presented a threat to white racial integrity and thus were not qualified to become Americans; on the other, the "mission" component of manifest destiny suggested that Mexicans would be improved (or "regenerated", as it was then described) by bringing them into American democracy. Identitarianism was used to promote manifest destiny, but, as in the case of Calhoun and the resistance to the "All Mexico" movement, identitarianism was also used to oppose manifest destiny. Conversely, proponents of annexation of "All Mexico" regarded it as an anti-slavery measure. The controversy was eventually ended by the Mexican Cession, which added the territories of Alta California and Nuevo México to the United States, both more sparsely populated than the rest of Mexico. Like the "All Oregon" movement, the "All Mexico" movement quickly abated. Historian Frederick Merk, in Manifest Destiny and Mission in American History: A Reinterpretation (1963), argued that the failure of the "All Oregon" and "All Mexico" movements indicates that manifest destiny had not been as popular as historians have traditionally portrayed it to have been. Merk wrote that, while belief in the beneficent mission of democracy was central to American history, episodes of aggressive "continentalism" were aberrations supported by only a minority of Americans, all of them Democrats. Some Democrats were also opposed; the Democrats of Louisiana opposed annexation of Mexico, while those in Mississippi supported it. These events were related to the Mexican–American War and affected Americans living on the Southern Plains at the time. A case study by David Beyreis depicts these effects through the operations of Bent, St. Vrain and Company, a fur-trading and Indian-trading business of the period. The company's story shows that the idea of manifest destiny was not unanimously loved by all Americans and did not always benefit them; indeed, the case study shows that the company could have ceased to exist in the name of territorial expansion. After the Mexican–American War ended in 1848, disagreements over the expansion of slavery made further annexation by conquest too divisive to be official government policy. Some, such as John Quitman, Governor of Mississippi, offered what public support they could. In one memorable case, Quitman simply explained that the state of Mississippi had "lost" its state arsenal, which began showing up in the hands of filibusters. Yet these isolated cases only solidified opposition in the North as many Northerners were increasingly opposed to what they believed to be efforts by Southern slave owners—and their friends in the North—to expand slavery through filibustering. On January 24, 1859, Sarah P. Remond delivered an impassioned speech at Warrington, England, arguing that the connection between filibustering and slave power was clear proof of "the mass of corruption that underlay the whole system of American government". 
The Wilmot Proviso and the continued "Slave Power" narratives thereafter indicated the degree to which manifest destiny had become part of the sectional controversy. Without official government support, the most radical advocates of manifest destiny increasingly turned to military filibustering. The word filibuster originally came from the Dutch vrijbuiter and referred to buccaneers in the West Indies who preyed on Spanish commerce. While there had been some filibustering expeditions into Canada in the late 1830s, only by mid-century did filibuster become a definitive term. By then, declared the New-York Daily Times, "the fever of Fillibusterism is on our country. Her pulse beats like a hammer at the wrist, and there's a very high color on her face." Millard Fillmore's second annual message to Congress, submitted in December 1851, gave twice as much space to filibustering activities as to the brewing sectional conflict. The eagerness of the filibusters, and of the public to support them, had an international hue. Clay's son, a diplomat in Portugal, reported that the invasion created a sensation in Lisbon. Although they were illegal, filibustering operations in the late 1840s and early 1850s were romanticized in the United States. The Democratic Party's national platform included a plank that specifically endorsed William Walker's filibustering in Nicaragua. Wealthy American expansionists financed dozens of expeditions, usually based out of New Orleans, New York, and San Francisco. The primary target of manifest destiny's filibusters was Latin America, but there were isolated incidents elsewhere. Mexico was a favorite target of organizations devoted to filibustering, like the Knights of the Golden Circle. William Walker got his start as a filibuster in an ill-advised attempt to separate the Mexican states of Sonora and Baja California. Narciso López, a near second in fame and success, spent his efforts trying to secure Cuba from the Spanish Empire. The United States had long been interested in acquiring Cuba from the declining Spanish Empire. As with Texas, Oregon, and California, American policy makers were concerned that Cuba would fall into British hands, which, according to the thinking of the Monroe Doctrine, would constitute a threat to the interests of the United States. Prompted by O'Sullivan, President Polk offered to buy Cuba from Spain in 1848 for $100 million. Polk feared that filibustering would hurt his effort to buy the island, and so he informed the Spanish of an attempt by the Cuban filibuster López to seize Cuba by force and annex it to the United States, foiling the plot. Spain declined to sell the island, which ended Polk's efforts to acquire Cuba. O'Sullivan eventually landed in legal trouble. Filibustering continued to be a major concern for presidents after Polk. Whig presidents Zachary Taylor and Millard Fillmore tried to suppress the expeditions. When the Democrats recaptured the White House in 1852 with the election of Franklin Pierce, a filibustering effort by John A. Quitman to acquire Cuba received the tentative support of the president. Pierce backed off and instead renewed the offer to buy the island, this time for $130 million. When the public learned of the Ostend Manifesto in 1854, which argued that the United States could seize Cuba by force if Spain refused to sell, the revelation effectively killed the effort to acquire the island. The public now linked expansion with slavery; if manifest destiny had once enjoyed widespread popular approval, this was no longer true. 
Filibusters like William Walker continued to garner headlines in the late 1850s, but to little effect. Expansionism was among the various issues that played a role in the coming of the Civil War. With the divisive question of the expansion of slavery, Northerners and Southerners, in effect, were coming to define manifest destiny in different ways, undermining nationalism as a unifying force. According to Frederick Merk, "The doctrine of Manifest Destiny, which in the 1840s had seemed Heaven-sent, proved to have been a bomb wrapped up in idealism." The filibusterism of the era even opened itself up to some mockery in the headlines. In 1854, a San Francisco newspaper published a satirical poem called "Filibustering Ethics", featuring two characters, Captain Robb and Farmer Cobb. Captain Robb lays claim to Farmer Cobb's land, arguing that he deserves it because he is Anglo-Saxon, because he has weapons to "blow out" Cobb's brains, and because nobody has ever heard of Cobb, so what right does Cobb have to the land? Cobb replies that Robb does not need his land, since Robb already has more land than he knows what to do with. Under threat of violence, Cobb surrenders his land and leaves, grumbling that "might should be the rule of right among enlightened nations." The Homestead Act of 1862 encouraged 600,000 families to settle the West by giving them land (usually 160 acres) almost free; claimants had to live on and improve the land for five years. Over the course of 123 years, some four million claims were filed and over 270 million acres were settled, accounting for 10% of the land in the U.S. Before the American Civil War, Southern leaders opposed the Homestead Acts because they feared the legislation would lead to more free states and free territories. After the mass resignation of Southern senators and representatives at the beginning of the war, Congress was able to pass the Homestead Act. In some areas, the Homestead Act resulted in the direct removal of Indigenous communities. According to American historian Roxanne Dunbar-Ortiz, all five nations of the "Five Civilized Tribes" signed treaties with the Confederacy and initially supported it in hopes of dividing and weakening the U.S. so that they could remain on their land. The United States Army, led by prominent Civil War generals such as William Tecumseh Sherman, Philip Sheridan, and George Armstrong Custer, waged wars on "non-treaty Indians" who continued to live on land that had already been ceded to the U.S. through treaty. Homesteaders and other settlers soon followed and took possession of the land for farms and mining. Occasionally, white settlers would move ahead of the U.S. Army, into land that had not yet been settled by the United States, causing conflict with the Native people who still resided there. According to Anglo-American historian Julius Wilm, while the U.S. government did not approve of settlers moving ahead of the Army, Indian Affairs officials did believe "the move of frontier whites into the proximity of contested territory—be they homesteaders or parties interested in other pursuits—necessitated the removal of Indigenous nations." According to historian Hannah Anderson, the Homestead Act also led to environmental degradation. While it succeeded in getting the land settled and farmed, the Act did nothing to preserve it. Continuous plowing made the topsoil vulnerable to wind and water erosion and stripped nutrients from the ground. This deforestation and erosion would play a key role in the Dust Bowl in the 1930s. 
Intense logging thinned much of the forests, and hunting devastated many native animal populations, including the bison, whose numbers were reduced to a few hundred. In 1859, Reuben Davis, a member of the House of Representatives from Mississippi, articulated one of the most expansive visions of manifest destiny on record: We may expand so as to include the whole world. Mexico, Central America, South America, Cuba, the West India Islands, and even England and France [we] might annex without inconvenience... allowing them with their local Legislatures to regulate their local affairs in their own way. And this, Sir, is the mission of this Republic and its ultimate destiny. As the Civil War faded into history, the term manifest destiny experienced a brief revival. Protestant missionary Josiah Strong, in his best-seller of 1885, Our Country, argued that the future devolved upon America, since it had perfected the ideals of civil liberty and "a pure spiritual Christianity"; he concluded, "My plea is not, Save America for America's sake, but, Save America for the world's sake." In the 1892 U.S. presidential election, the Republican Party platform proclaimed: "We reaffirm our approval of the Monroe doctrine and believe in the achievement of the manifest destiny of the Republic in its broadest sense." What was meant by "manifest destiny" in this context was not clearly defined, particularly since the Republicans lost the election. In the 1896 election, the Republicans recaptured the White House and held on to it for the next 16 years. During that time, manifest destiny was cited to promote overseas expansion. Whether this version of manifest destiny was consistent with the continental expansionism of the 1840s was debated at the time, and long afterwards. For example, when President William McKinley advocated annexation of the Republic of Hawaii in 1898, he said that "We need Hawaii just as much and a good deal more than we did California. It is manifest destiny." On the other hand, former President Grover Cleveland, a Democrat who had blocked the annexation of Hawaii during his administration, wrote that McKinley's annexation of the territory was a "perversion of our national destiny". Historians continued that debate; some have interpreted American acquisition of other Pacific island groups in the 1890s as an extension of manifest destiny across the Pacific Ocean, while others have regarded it as the antithesis of manifest destiny and mere imperialism. In 1898, the United States intervened in the Cuban insurrection and launched the Spanish–American War to force Spain out. According to the terms of the Treaty of Paris, Spain relinquished sovereignty over Cuba and ceded the Philippine Islands, Puerto Rico, and Guam to the United States. The terms of cession for the Philippines involved a payment of $20 million by the United States to Spain. The treaty was highly contentious and denounced by William Jennings Bryan, who tried to make it a central issue in the 1900 election, which he lost to McKinley. The Teller Amendment, passed unanimously by the U.S. Senate before the war, had proclaimed Cuba "free and independent" and forestalled annexation of the island. The Platt Amendment (1902) then established Cuba as a virtual protectorate of the United States. 
The United States, German Empire, and United Kingdom participated in the Tripartite Convention of 1899 at the end of the Second Samoan Civil War, resulting in the formal partition of the Samoan archipelago into a German colony and the U.S. territory of what is now called American Samoa. The United States annexed Tutuila in 1900, Manu'a in 1904, and Swains Island in 1925. The eastern Samoan islands became a territory of the United States. The western islands, by far the greater landmass, became known as German Samoa, after Britain gave up all claims to Samoa and in return accepted the termination of German rights in Tonga and certain areas in the Solomon Islands and West Africa. Forerunners to the Tripartite Convention of 1899 were the Washington Conference of 1887, the Treaty of Berlin of 1889, and the Anglo-German Agreement on Samoa of 1899. The following year, the U.S. formally annexed its portion, a smaller group of eastern islands, one of which contains the noted harbor of Pago Pago. After the United States Navy took possession of eastern Samoa for the United States government, the existing coaling station at Pago Pago Bay was expanded into a full naval station, known as United States Naval Station Tutuila and commanded by a commandant. The Navy secured a Deed of Cession of Tutuila in 1900 and a Deed of Cession of Manuʻa in 1904 on behalf of the U.S. government. The last sovereign of Manuʻa, the Tui Manuʻa Elisala, signed a Deed of Cession of Manuʻa following a series of U.S. naval trials, known as the "Trial of the Ipu", in Pago Pago, Taʻu, and aboard a Pacific Squadron gunboat. On July 17, 1911, the U.S. Naval Station Tutuila, which was composed of Tutuila, Aunuʻu and Manuʻa, was officially renamed American Samoa. The people of Manuʻa had been unhappy since they were left out of the name "Naval Station Tutuila". In May 1911, Governor William Michael Crose authored a letter to the Secretary of the Navy conveying the sentiments of Manuʻa. The department responded that the people should choose a name for their new territory. The traditional leaders chose "American Samoa", and, on July 7, 1911, the solicitor general of the Navy authorized the governor to proclaim it as the name for the new territory. The acquisition of Hawaii, the Philippines, Puerto Rico, Guam, and American Samoa marked a new chapter in U.S. history. Traditionally, territories were acquired by the United States for the purpose of becoming new states on equal footing with already existing states. These islands were acquired as colonies rather than prospective states. The process was validated by the Insular Cases, in which the Supreme Court ruled that full constitutional rights did not automatically extend to all areas under American control. The Philippines became independent in 1946 and Hawaii became a state in 1959, but Puerto Rico, Guam, and American Samoa remain territories. According to Frederick Merk, these colonial acquisitions marked a break from the original intention of manifest destiny. Previously, "Manifest Destiny had contained a principle so fundamental that a Calhoun and an O'Sullivan could agree on it—that a people not capable of rising to statehood should never be annexed. That was the principle thrown overboard by the imperialism of 1899." Albert J. Beveridge maintained the contrary in his September 25, 1900, speech in the Auditorium, at Chicago. 
He declared that the current desire for Cuba and the other acquired territories was identical to the views expressed by Washington, Jefferson and Marshall. Moreover, "the sovereignty of the Stars and Stripes can be nothing but a blessing to any people and to any land." The nascent revolutionary government, desirous of independence, resisted the United States in the Philippine–American War in 1899; it won no support from any government anywhere and collapsed when its leader was captured. William Jennings Bryan denounced the war and any form of future overseas expansion, writing, "'Destiny' is not as manifest as it was a few weeks ago." In 1917, all Puerto Ricans were made full American citizens via the Jones Act, which also provided for a popularly elected legislature and a bill of rights, and authorized the election of a Resident Commissioner who has a voice (but no vote) in Congress. In 1934, the Tydings–McDuffie Act put the Philippines on a path to independence, which was realized in 1946 with the Treaty of Manila. The Guam Organic Act of 1950 established Guam alongside Puerto Rico as an unincorporated organized territory of the United States, provided for the structure of the island's civilian government, and granted the people U.S. citizenship. In 2025, Donald Trump became the first president to use the phrase "manifest destiny" during an inaugural address, declaring an extension of American influence "into the stars" with ambitions to plant the U.S. flag on Mars. Since being elected, Trump has at various points suggested annexing Canada, Greenland, and the Panama Canal, invading Venezuela and Mexico, and taking over the Gaza Strip. Impact on Native Americans Manifest destiny had serious consequences for Native Americans, since continental expansion implicitly meant the occupation and annexation of Native American land. This ultimately led to confrontations and wars with several groups of native peoples via Indian removal. The United States continued the European practice of recognizing only limited land rights of Indigenous peoples. In a policy formulated largely by Henry Knox, Secretary of War in the Washington Administration, the U.S. government sought to expand into the west through the purchase of Native American land in treaties. Only the federal government could purchase Indian lands, and this was done through treaties with tribal leaders. Whether a tribe actually had a decision-making structure capable of making a treaty was a controversial issue. The national policy was for the Indians to join American society and become "civilized", which meant no more wars with neighboring tribes or raids on white settlers or travelers, and a shift from hunting to farming and ranching. Advocates of civilization programs believed that the process of settling native tribes would greatly reduce the amount of land needed by the Native Americans, making more land available for homesteading by white Americans. Thomas Jefferson believed that, while the Indigenous people of America were the intellectual equals of whites, they had to assimilate to and live like the whites or inevitably be pushed aside by them. According to historian Jeffrey Ostler, Jefferson advocated for the extermination of Indigenous people once he believed assimilation was no longer possible. On February 27, 1803, Jefferson wrote in a letter to William Henry Harrison: "but this letter being unofficial, & private, I may with safety give you a more extensive view of our policy respecting the Indians... 
Our system is to live in perpetual peace with the Indians, to cultivate an affectionate attachment from them, by everything just & liberal which we can do for them within the bounds of reason, and by giving them effectual protection against wrongs from our own people. The decrease of game rendering their subsistence by hunting insufficient, we wish to draw them to agriculture, to spinning & weaving... when they withdraw themselves to the culture of a small piece of land, they will perceive how useless to them are their extensive forests, and will be willing to pare them off from time to time in exchange for necessaries for their farms & families. At our trading houses too we mean to sell so low as merely to repay us cost and charges so as neither to lessen or enlarge our capital. this is what private traders cannot do, for they must gain; they will consequently retire from the competition, & we shall thus get clear of this pest without giving offence or umbrage to the Indians. in this way our settlements will gradually circumbscribe & approach the Indians, & they will in time either incorporate with us as citizens of the U.S. or remove beyond the Mississippi." As law scholar and professor Robert J. Miller notes, Thomas Jefferson "understood and utilized the Doctrine of Discovery [aka Manifest destiny] through his political careers and was heavily involved in using the Doctrine against Indian tribes." Jefferson was "often immersed in Indian affairs through his legal and political careers" and "was also well acquainted with the process Virginia governments had historically used to extinguish Indian [land] titles". Jefferson used this knowledge to make the Louisiana Purchase in 1803, aided in the construction of the Indian removal policy, and laid the groundwork for removing Native American tribes farther and farther west, eventually onto small reservation territories. The idea of "Indian removal" gained traction in the context of manifest destiny and, with Jefferson as one of the main political voices on the subject, accumulated advocates who believed that American Indians would be better off moving away from white settlers. The removal effort was further solidified through policy by Andrew Jackson when he signed the Indian Removal Act in 1830. In his First Annual Message to Congress in 1829, Jackson stated with regard to removal: I suggest for your consideration the propriety of setting apart an ample district west of the Mississippi, and without the limits of any state or territory now formed, to be guaranteed to the Indian tribes as long as they shall occupy it, each tribe having a distinct control over the portion designated for its use. There they may be secured in the enjoyment of governments of their own choice, subject to no other control from the United States than such as may be necessary to preserve peace on the frontier and between the several tribes. There the benevolent may endeavor to teach them the arts of civilization, and, by promoting union and harmony among them, to raise up an interesting commonwealth, destined to perpetuate the race and to attest the humanity and justice of this government. Following the forced removal of many Indigenous Peoples, Americans increasingly believed that Native American ways of life would eventually disappear as the United States expanded. Humanitarian advocates of removal believed that American Indians would be better off moving away from whites. 
As historian Reginald Horsman argued in his influential study Race and Manifest Destiny, racial rhetoric increased during the era of manifest destiny. Americans increasingly believed that Native American ways of life would "fade away" as the United States expanded. As an example, this idea was reflected in the work of one of America's first great historians, Francis Parkman, whose landmark book The Conspiracy of Pontiac was published in 1851. Parkman wrote that after the French defeat in the French and Indian War, Indians were "destined to melt and vanish before the advancing waves of Anglo-American power, which now rolled westward unchecked and unopposed". Parkman emphasized that the collapse of Indian power in the late 18th century had been swift and was a past event. Legacy and consequences The belief in an American mission to promote and defend democracy throughout the world, as expounded by Jefferson and his "Empire of Liberty", and continued by Lincoln, Wilson and George W. Bush, continues to have an influence on American political ideology. Under Douglas MacArthur, the Americans "were imbued with a sense of manifest destiny," says historian John Dower. After the turn of the 20th century, the phrase "manifest destiny" declined in usage, as territorial expansion ceased to be promoted as being a part of America's "destiny". Under President Theodore Roosevelt, the role of the United States in the New World was defined, in the 1904 Roosevelt Corollary to the Monroe Doctrine, as being an "international police power" to secure American interests in the Western Hemisphere. Roosevelt's corollary contained an explicit rejection of territorial expansion. In the past, manifest destiny had been seen as necessary to enforce the Monroe Doctrine in the Western Hemisphere, but now expansionism had been replaced by interventionism as a core value associated with the doctrine. President Wilson continued the policy of interventionism in the Americas, and attempted to redefine both manifest destiny and America's "mission" on a broader, worldwide scale. Wilson led the United States into World War I with the argument that "The world must be made safe for democracy." In his 1920 message to Congress after the war, Wilson stated: ... I think we all realize that the day has come when Democracy is being put upon its final test. The Old World is just now suffering from a wanton rejection of the principle of democracy and a substitution of the principle of autocracy as asserted in the name, but without the authority and sanction, of the multitude. This is the time of all others when Democracy should prove its purity and its spiritual power to prevail. It is surely the manifest destiny of the United States to lead in the attempt to make this spirit prevail. This was the only time a president had used the phrase "manifest destiny" in his annual address. Wilson's version of manifest destiny was a rejection of expansionism and an endorsement (in principle) of self-determination, emphasizing that the United States had a mission to be a world leader for the cause of democracy. This U.S. vision of itself as the leader of the "Free World" would grow stronger in the 20th century after the end of World War II, although rarely would it be described as "manifest destiny", as Wilson had done. "Manifest destiny" is sometimes used by critics of U.S. foreign policy to characterize interventions in the Middle East and elsewhere. 
In this usage, "manifest destiny" is interpreted as the underlying cause of what is denounced by some as "American imperialism". A more positive-sounding phrase devised by scholars at the end of the 20th century is "nation building", and State Department official Karin Von Hippel notes that the U.S. has "been involved in nation-building and promoting democracy since the middle of the 19th century and 'Manifest Destiny'". Criticisms Critics have condemned manifest destiny as an ideology used to justify dispossession and genocide against indigenous peoples, arguing that it resulted in the forceful settler-colonial displacement of Indigenous Americans in order to carry out colonial expansion. At the time, critics of the country's growing appetite for expansion doubted its ability to rule so extensive an empire.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Cholesterol] | [TOKENS: 6755]
Cholesterol Cholesterol is the principal sterol of all animals, distributed in body tissues, especially the brain and spinal cord, and in animal fats and oils. Cholesterol is biosynthesized by all animal cells and is an essential structural and signaling component of animal cell membranes. In vertebrates, hepatic cells typically produce the greatest amounts. In the brain, astrocytes produce cholesterol and transport it to neurons. It is absent among prokaryotes (bacteria and archaea), although there are some exceptions, such as Mycoplasma, which require cholesterol for growth. Cholesterol also serves as a precursor for the biosynthesis of steroid hormones, bile acid, and vitamin D. Elevated levels of cholesterol in the blood, especially when bound to low-density lipoprotein (LDL, often referred to as "bad cholesterol"), may increase the risk of cardiovascular disease. François Poulletier de la Salle first identified cholesterol in solid form in gallstones in 1769. In 1815, chemist Michel Eugène Chevreul named the compound "cholesterine". Etymology The word cholesterol comes from Ancient Greek chole- 'bile' and stereos 'solid', followed by the chemical suffix -ol for an alcohol. Physiology Cholesterol is essential for all animal life. While most cells are capable of synthesizing it, the majority of cholesterol is ingested or synthesized by hepatocytes and transported in the blood to peripheral cells. The levels of cholesterol in peripheral tissues are dictated by a balance of uptake and export. Under normal conditions, brain cholesterol is separate from peripheral cholesterol; that is, dietary and hepatic cholesterol do not cross the blood-brain barrier. Rather, astrocytes produce and distribute cholesterol in the brain. De novo synthesis, both in astrocytes and hepatocytes, occurs by a complex 37-step process. This begins with the mevalonate or HMG-CoA reductase pathway, the target of statin drugs, which encompasses the first 18 steps. This is followed by 19 additional steps to convert the resulting lanosterol into cholesterol. A human male weighing 68 kg (150 lb) normally synthesizes about 1 gram (1,000 mg) of cholesterol per day, and his body contains about 35 g, mostly contained within the cell membranes. Typical daily cholesterol dietary intake for a man in the United States is 307 mg. Most ingested cholesterol is esterified, which causes it to be poorly absorbed by the gut. The body also compensates for absorption of ingested cholesterol by reducing its own cholesterol synthesis. For these reasons, cholesterol in food, seven to ten hours after ingestion, has little, if any, effect on concentrations of cholesterol in the blood. Conversely, in rats, blood cholesterol is inversely correlated with cholesterol consumption: the more cholesterol a rat eats, the lower its blood cholesterol. During the first seven hours after ingestion of cholesterol, as absorbed fats are being distributed around the body within extracellular water by the various lipoproteins (which transport all fats in the water outside cells), the concentrations increase. Plants make cholesterol in very small amounts. In larger quantities they produce phytosterols, chemically similar substances that compete with cholesterol for reabsorption in the intestinal tract, thus potentially reducing cholesterol reabsorption. When intestinal lining cells absorb phytosterols in place of cholesterol, they usually excrete the phytosterol molecules back into the GI tract, an important protective mechanism. 
The intake of naturally occurring phytosterols, which encompass plant sterols and stanols, ranges between ≈200–300 mg/day depending on eating habits. Specially designed vegetarian experimental diets have been produced yielding upwards of 700 mg/day. Cholesterol is present in varying degrees in all animal cell membranes but is absent in prokaryotes. It is required to build and maintain membranes and modulates membrane fluidity over the range of physiological temperatures. The hydroxyl group of each cholesterol molecule interacts with water molecules surrounding the membrane, as do the polar heads of the membrane phospholipids and sphingolipids, while the bulky steroid and the hydrocarbon chain are embedded in the membrane, alongside the nonpolar fatty-acid chains of the other lipids. Through its interaction with the phospholipid fatty-acid chains, cholesterol increases membrane packing, which both alters membrane fluidity and maintains membrane integrity, so that animal cells do not need to build cell walls (like plants and most bacteria). The membrane remains stable and durable without being rigid, allowing animal cells to change shape and animals to move. The structure of the tetracyclic ring of cholesterol contributes to the fluidity of the cell membrane, as the molecule is in a trans conformation, making all but the side chain of cholesterol rigid and planar. In this structural role, cholesterol also reduces the permeability of the plasma membrane to neutral solutes, hydrogen ions, and sodium ions. Cholesterol regulates the biological process of substrate presentation and the enzymes that use substrate presentation as a mechanism of their activation. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated, causing it to traffic to cholesterol-dependent lipid domains, sometimes called "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC), which is unsaturated and of low abundance in lipid rafts. PC localizes to the disordered region of the membrane along with the polyunsaturated lipid phosphatidylinositol 4,5-bisphosphate (PIP2). PLD2 has a PIP2 binding domain. When the PIP2 concentration in the membrane increases, PLD2 leaves the cholesterol-dependent domains and binds to PIP2, where it gains access to its substrate PC and commences catalysis based on substrate presentation. Cholesterol is also implicated in cell signaling processes, assisting in the formation of lipid rafts in the plasma membrane, which bring receptor proteins into close proximity with high concentrations of second messenger molecules. In multiple layers, cholesterol and phospholipids (both electrical insulators) can facilitate the speed of transmission of electrical impulses along nerve tissue. For many neuron fibers, a myelin sheath, rich in cholesterol since it is derived from compacted layers of Schwann cell or oligodendrocyte membranes, provides insulation for more efficient conduction of impulses. Demyelination (loss of myelin) is believed to be part of the basis for multiple sclerosis. Cholesterol binds to and affects the gating of a number of ion channels such as the nicotinic acetylcholine receptor, GABAA receptor, and the inward-rectifier potassium channel. Cholesterol also activates the estrogen-related receptor alpha (ERRα) and may be the endogenous ligand for the receptor. The constitutively active nature of the receptor may be explained by the fact that cholesterol is ubiquitous in the body. 
Inhibition of ERRα signaling by reduction of cholesterol production has been identified as a key mediator of the effects of statins and bisphosphonates on bone, muscle, and macrophages. On the basis of these findings, it has been suggested that ERRα should be de-orphanized and classified as a receptor for cholesterol. Within cells, cholesterol is a precursor molecule for several biochemical pathways. For example, it is the precursor molecule for the synthesis of vitamin D in calcium metabolism and of all steroid hormones, including the adrenal gland hormones cortisol and aldosterone, the sex hormones progesterone, estrogens, and testosterone, and their derivatives. The stratum corneum is the outermost layer of the epidermis. It is composed of terminally differentiated and enucleated corneocytes that reside within a lipid matrix, like "bricks and mortar". Together with ceramides and free fatty acids, cholesterol forms the lipid mortar, a water-impermeable barrier that prevents evaporative water loss. As a rule of thumb, the epidermal lipid matrix is composed of an equimolar mixture of ceramides (≈50% by weight), cholesterol (≈25% by weight), and free fatty acids (≈15% by weight), with smaller quantities of other lipids also present. Cholesterol sulfate reaches its highest concentration in the granular layer of the epidermis. Steroid sulfatase then decreases its concentration in the stratum corneum, the outermost layer of the epidermis. The relative abundance of cholesterol sulfate in the epidermis varies across different body sites, with the heel of the foot having the lowest concentration. Cholesterol is recycled in the body. The liver excretes cholesterol into biliary fluids, which are then stored in the gallbladder, from which they are excreted in a non-esterified form (via bile) into the digestive tract. Typically, about 50% of the excreted cholesterol is reabsorbed by the small intestine back into the bloodstream. Biosynthesis and regulation Almost all animal tissues synthesize cholesterol from acetyl-CoA. All animal cells (with some exceptions within the invertebrates) manufacture cholesterol, for both membrane structure and other uses, with relative production rates varying by cell type and organ function. About 80% of total daily cholesterol production occurs in the liver and the intestines; other sites of higher synthesis rates include the brain, the adrenal glands, and the reproductive organs. Synthesis within the body starts with the mevalonate pathway, where two molecules of acetyl-CoA condense to form acetoacetyl-CoA. This is followed by a second condensation, between acetyl-CoA and acetoacetyl-CoA, to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA). This molecule is then reduced to mevalonate by the enzyme HMG-CoA reductase. Production of mevalonate is the rate-limiting and irreversible step in cholesterol synthesis and is the site of action for statins (a class of cholesterol-lowering drugs). Mevalonate is finally converted to isopentenyl pyrophosphate (IPP) through two phosphorylation steps and one ATP-requiring decarboxylation step. Three molecules of isopentenyl pyrophosphate condense to form farnesyl pyrophosphate through the action of geranyl transferase. Two molecules of farnesyl pyrophosphate then condense to form squalene by the action of squalene synthase in the endoplasmic reticulum. Oxidosqualene cyclase then cyclizes squalene to form lanosterol. 
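As a quick reference, the sketch below records the acetyl-CoA-to-lanosterol stages just described as plain Python data. It is a deliberate simplification (the full route involves roughly 37 enzymatic steps), and the enzyme names for the first two condensations, thiolase and HMG-CoA synthase, are standard identifications added here rather than taken from the text above.

```python
# A simplified, plain-data summary of the biosynthetic stages described
# above, from acetyl-CoA to lanosterol. Step granularity is coarse; the
# full pathway comprises roughly 37 enzymatic steps.

MEVALONATE_TO_LANOSTEROL = [
    # (substrate(s), product, enzyme / note)
    ("2x acetyl-CoA", "acetoacetyl-CoA", "thiolase (condensation)"),
    ("acetyl-CoA + acetoacetyl-CoA", "HMG-CoA", "HMG-CoA synthase"),
    ("HMG-CoA", "mevalonate",
     "HMG-CoA reductase (rate-limiting, irreversible; statin target)"),
    ("mevalonate", "isopentenyl pyrophosphate (IPP)",
     "two phosphorylations + one ATP-requiring decarboxylation"),
    ("3x IPP", "farnesyl pyrophosphate (FPP)", "geranyl transferase"),
    ("2x FPP", "squalene", "squalene synthase (endoplasmic reticulum)"),
    ("squalene", "lanosterol", "oxidosqualene cyclase"),
]

for substrates, product, enzyme in MEVALONATE_TO_LANOSTEROL:
    print(f"{substrates:30} -> {product:35} [{enzyme}]")
```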
Finally, lanosterol is converted to cholesterol via either of two pathways, the Bloch pathway or the Kandutsch-Russell pathway. The final 19 steps to cholesterol involve NADPH and oxygen to oxidize methyl groups for the removal of carbons, mutases to move alkene groups, and NADH to reduce ketones. Konrad Bloch and Feodor Lynen shared the Nobel Prize in Physiology or Medicine in 1964 for their discoveries concerning some of the mechanisms and methods of regulation of cholesterol and fatty acid metabolism. Biosynthesis of cholesterol is directly regulated by the cholesterol levels present, though the homeostatic mechanisms involved are only partly understood. A higher intake of cholesterol from food leads to a net decrease in endogenous production, whereas a lower intake from food has the opposite effect. The main regulatory mechanism is the sensing of intracellular cholesterol in the endoplasmic reticulum by the proteins SREBP (sterol regulatory element-binding proteins 1 and 2). In the presence of cholesterol, SREBP is bound to two other proteins: SCAP (SREBP cleavage-activating protein) and INSIG-1. When cholesterol levels fall, INSIG-1 dissociates from the SREBP-SCAP complex, which allows the complex to migrate to the Golgi apparatus. Here SREBP is cleaved by S1P and S2P (site-1 protease and site-2 protease), two enzymes that are activated by SCAP when cholesterol levels are low. The cleaved SREBP then migrates to the nucleus and acts as a transcription factor to bind to the sterol regulatory element (SRE), which stimulates the transcription of many genes. Among these are the low-density lipoprotein (LDL) receptor and HMG-CoA reductase. The LDL receptor scavenges circulating LDL from the bloodstream, whereas HMG-CoA reductase leads to an increase in endogenous production of cholesterol. A large part of this signaling pathway was clarified by Dr. Michael S. Brown and Dr. Joseph L. Goldstein in the 1970s. In 1985, they received the Nobel Prize in Physiology or Medicine for their work. Their subsequent work shows how the SREBP pathway regulates the expression of many genes that control lipid formation and metabolism and body fuel allocation. Cholesterol synthesis can also be turned off when cholesterol levels are high. HMG-CoA reductase contains both a cytosolic domain (responsible for its catalytic function) and a membrane domain which senses signals for its degradation. Increasing concentrations of cholesterol (and other sterols) cause a change in this domain's oligomerization state, making it more susceptible to destruction by the proteasome. This enzyme's activity can also be reduced by phosphorylation by an AMP-activated protein kinase. Because this kinase is activated by AMP, which is produced when ATP is hydrolyzed, it follows that cholesterol synthesis is halted when ATP levels are low. As an isolated molecule, cholesterol is only minimally soluble in water; it is barely hydrophilic. Because of this, it dissolves in blood at exceedingly small concentrations. To be transported effectively, cholesterol is instead packaged within lipoproteins, complex discoidal particles with exterior amphiphilic proteins and lipids whose outward-facing surfaces are water-soluble and inward-facing surfaces are lipid-soluble. This allows it to travel through the blood via emulsification. Unbound cholesterol, being amphipathic, is transported in the monolayer surface of the lipoprotein particle, along with phospholipids and proteins. 
Cholesterol esters (cholesterol bound to a fatty acid), on the other hand, are transported within the fatty, hydrophobic core of the lipoprotein, along with triglyceride. There are several types of lipoproteins in the blood. In order of increasing density, they are chylomicrons, very-low-density lipoprotein (VLDL), intermediate-density lipoprotein (IDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL); lower protein/lipid ratios make for less dense lipoproteins. Cholesterol within different lipoproteins is identical, although some is carried in its native "free" alcohol form (with the cholesterol-OH group facing the water surrounding the particles), and some as fatty acyl esters (known also as cholesterol esters) within the particles. Lipoprotein particles are organized by complex apolipoproteins, typically between 80 and 100 different proteins per particle, which can be recognized and bound by specific receptors on cell membranes, directing their lipid payload into specific cells and tissues currently ingesting these fat transport particles. These surface receptors serve as unique molecular signatures, which then help determine fat distribution delivery throughout the body. Chylomicrons, the least dense cholesterol transport particles, contain apolipoprotein B-48, apolipoprotein C, and apolipoprotein E (the principal cholesterol carrier in the brain) in their shells. Chylomicrons carry fats from the intestine to muscle and other tissues in need of fatty acids for energy or fat production. Unused cholesterol remains in more cholesterol-rich chylomicron remnants, which are taken up from the bloodstream by the liver. VLDL particles are produced by the liver from triacylglycerol and cholesterol not used in the synthesis of bile acids. These particles contain apolipoprotein B100 and apolipoprotein E in their shells and can be degraded by lipoprotein lipase on the artery wall to IDL. This arterial wall cleavage allows absorption of triacylglycerol and increases the concentration of circulating cholesterol. IDL particles are then consumed in two processes: half are metabolized by hepatic triglyceride lipase (HTGL) and taken up by the LDL receptor on the liver cell surfaces, while the other half continue to lose triacylglycerols in the bloodstream until they become cholesterol-laden LDL particles. LDL particles are the major blood cholesterol carriers; each one contains approximately 1,500 molecules of cholesterol ester. LDL particle shells contain just one molecule of apolipoprotein B100, recognized by LDL receptors in peripheral tissues. Upon binding of apolipoprotein B100, many LDL receptors concentrate in clathrin-coated pits. Both LDL and its receptor form vesicles within a cell via endocytosis. These vesicles then fuse with a lysosome, where the lysosomal acid lipase enzyme hydrolyzes the cholesterol esters. The cholesterol can then be used for membrane biosynthesis, or esterified and stored within the cell so as not to interfere with the cell membranes. LDL receptors are used up during cholesterol absorption, and their synthesis is regulated by SREBP, the same protein that controls the synthesis of cholesterol de novo, according to its presence inside the cell. A cell with abundant cholesterol will have its LDL receptor synthesis blocked, to prevent new cholesterol in LDL particles from being taken up. Conversely, LDL receptor synthesis proceeds when a cell is deficient in cholesterol. When this process becomes unregulated, LDL particles without receptors begin to appear in the blood. 
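The feedback logic described over the last two paragraphs, sterol sensing by SREBP/SCAP/INSIG-1 and the downstream transcription of the LDL receptor and HMG-CoA reductase genes, can be summarized in a toy sketch. The Python below is purely qualitative (no kinetics or intermediate states), and the function name and dictionary keys are illustrative inventions, not an established API.

```python
# A qualitative toy model of the SREBP feedback loop described above.
# It captures only the on/off logic, not kinetics.

def srebp_response(er_cholesterol_low: bool) -> dict:
    """Outcome of the SREBP/SCAP/INSIG-1 sterol-sensing step."""
    if er_cholesterol_low:
        # INSIG-1 dissociates; the SREBP-SCAP complex moves to the Golgi,
        # where the S1P and S2P proteases release active SREBP, which then
        # enters the nucleus and binds the sterol regulatory element (SRE).
        srebp_active = True
    else:
        # With ample sterol, SREBP stays tethered in the ER by SCAP/INSIG-1.
        srebp_active = False
    return {
        "srebp_cleaved": srebp_active,
        # SRE-driven target genes named in the text:
        "ldl_receptor_transcription": srebp_active,       # more LDL uptake
        "hmg_coa_reductase_transcription": srebp_active,  # more synthesis
    }

# Low ER cholesterol switches both target genes on; high switches them off.
assert srebp_response(er_cholesterol_low=True)["ldl_receptor_transcription"]
assert not srebp_response(er_cholesterol_low=False)["srebp_cleaved"]
```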
These LDL particles are oxidized and taken up by macrophages, which become engorged and form foam cells. These foam cells often become trapped in the walls of blood vessels and contribute to atherosclerotic plaque formation. Differences in cholesterol homeostasis affect the development of early atherosclerosis (carotid intima-media thickness). These plaques are the main causes of heart attacks, strokes, and other serious medical problems, leading to the association of so-called LDL cholesterol (actually a lipoprotein) with the term "bad" cholesterol. HDL particles are thought to transport cholesterol back to the liver, either for excretion or for other tissues that synthesize hormones, in a process known as reverse cholesterol transport (RCT). Large numbers of HDL particles correlate with better health outcomes, whereas low numbers of HDL particles are associated with atheromatous disease progression in the arteries. Cholesterol is susceptible to oxidation and easily forms oxygenated derivatives called oxysterols. These can be formed by three different mechanisms: autoxidation, oxidation secondary to lipid peroxidation, and oxidation by cholesterol-metabolizing enzymes. Great interest in oxysterols arose when they were shown to exert inhibitory actions on cholesterol biosynthesis. This finding became known as the "oxysterol hypothesis". Additional roles for oxysterols in human physiology include their participation in bile acid biosynthesis, function as transport forms of cholesterol, and regulation of gene transcription. In biochemical experiments, radiolabelled forms of cholesterol, such as tritiated cholesterol, are used. These derivatives undergo degradation upon storage, so it is essential to purify cholesterol prior to use. Cholesterol can be purified using small Sephadex LH-20 columns. Cholesterol is oxidized by the liver into a variety of bile acids. These, in turn, are conjugated with glycine, taurine, glucuronic acid, or sulfate. A mixture of conjugated and nonconjugated bile acids, along with cholesterol itself, is excreted from the liver into the bile. Approximately 95% of the bile acids are reabsorbed from the intestines, and the remainder are lost in the feces. The excretion and reabsorption of bile acids form the basis of the enterohepatic circulation, which is essential for the digestion and absorption of dietary fats. Under certain circumstances, when more concentrated, as in the gallbladder, cholesterol crystallizes and is the major constituent of most gallstones (lecithin and bilirubin gallstones also occur, but less frequently). Every day, up to one gram of cholesterol enters the colon. This cholesterol originates from the diet, bile, and desquamated intestinal cells, and it can be metabolized by the colonic bacteria. Cholesterol is converted mainly into coprostanol, a nonabsorbable sterol that is excreted in the feces. Although cholesterol is a steroid generally associated with mammals, the human pathogen Mycobacterium tuberculosis is able to completely degrade this molecule and contains a large number of genes that are regulated by its presence. Many of these cholesterol-regulated genes are homologues of fatty acid β-oxidation genes, which have evolved in such a way as to bind large steroid substrates like cholesterol. Dietary sources Animal fats are complex mixtures of triglycerides, with lesser amounts of both the phospholipids and cholesterol molecules from which all animal (and human) cell membranes are constructed.
Since all animal cells manufacture cholesterol, all animal-based foods contain cholesterol in varying amounts. Major dietary sources of cholesterol include red meat, egg yolks and whole eggs, liver, kidney, giblets, fish oil, shellfish, and butter. Human breast milk also contains significant quantities of cholesterol. Plant cells synthesize cholesterol as a precursor for other compounds, such as phytosterols and steroidal glycoalkaloids, with cholesterol remaining in plant foods only in minor amounts or absent. Some plant foods, such as avocado, flax seeds and peanuts, contain phytosterols, which compete with cholesterol for absorption in the intestines and reduce the absorption of both dietary and bile cholesterol. A typical diet contributes on the order of 0.2 grams of phytosterols per day, not enough to have a significant impact on blocking cholesterol absorption. The intake of phytosterols can be supplemented through the use of phytosterol-containing functional foods or dietary supplements that are recognized as having potential to reduce levels of LDL-cholesterol. In 2015, the scientific advisory panel of the U.S. Department of Health and Human Services and the U.S. Department of Agriculture for the 2015 iteration of the Dietary Guidelines for Americans dropped the previously recommended limit on dietary cholesterol consumption of 300 mg per day in favor of a new recommendation to "eat as little dietary cholesterol as possible", thereby acknowledging an association between a diet low in cholesterol and reduced risk of cardiovascular disease. A 2013 report by the American Heart Association and the American College of Cardiology recommended focusing on healthy dietary patterns rather than specific cholesterol limits, as the latter are hard for clinicians and consumers to implement. They recommend the DASH and Mediterranean diets, both of which are low in cholesterol. A 2017 review by the American Heart Association recommends replacing saturated fats with polyunsaturated fats to reduce cardiovascular disease risk. Some supplemental guidelines have recommended doses of phytosterols on the order of 1.6–3.0 grams per day (Health Canada, EFSA, ATP III, FDA). A meta-analysis demonstrated a 12% reduction in LDL-cholesterol at a mean dose of 2.1 grams per day. The benefits of a diet supplemented with phytosterols have also been questioned. Clinical significance According to the lipid hypothesis, elevated levels of cholesterol in the blood lead to atherosclerosis, which may increase the risk of heart attack, stroke, and peripheral artery disease. Since higher blood LDL – especially higher LDL concentrations and smaller LDL particle size – contributes to this process more than the cholesterol content of the LDL particles, LDL particles are often termed "bad cholesterol". High concentrations of functional HDL, which can remove cholesterol from cells and atheromas, offer protection and are commonly referred to as "good cholesterol". These balances are mostly genetically determined but can be changed by body composition, medications, diet, and other factors. A 2007 study demonstrated that blood total cholesterol levels have an exponential effect on cardiovascular and total mortality, with the association more pronounced in younger subjects. Because cardiovascular disease is relatively rare in the younger population, the impact of high cholesterol on health is larger in older people.
Elevated levels of the lipoprotein fractions LDL, IDL and VLDL, rather than the total cholesterol level, correlate with the extent and progression of atherosclerosis. Conversely, the total cholesterol can be within normal limits, yet be made up primarily of small LDL and small HDL particles, under which conditions atheroma growth rates are high. A post hoc analysis of the IDEAL and the EPIC prospective studies found an association between high levels of HDL cholesterol (adjusted for apolipoprotein A-I and apolipoprotein B) and increased risk of cardiovascular disease, casting doubt on the cardioprotective role of "good cholesterol". About one in 250 individuals has a genetic mutation in the LDL cholesterol receptor that causes them to have familial hypercholesterolemia. Inherited high cholesterol can also include genetic mutations in the PCSK9 gene and the gene for apolipoprotein B. Elevated cholesterol levels are treatable by a diet that reduces or eliminates saturated fat and trans fats, often followed by one of various hypolipidemic agents, such as statins, fibrates, cholesterol absorption inhibitors, monoclonal antibody therapy (PCSK9 inhibitors), nicotinic acid derivatives or bile acid sequestrants. There are several international guidelines on the treatment of hypercholesterolemia. Human trials using HMG-CoA reductase inhibitors, commonly known as statins, have repeatedly confirmed that changing lipoprotein transport patterns from unhealthy to healthier patterns significantly lowers cardiovascular disease event rates, even for people with cholesterol values currently considered low for adults. Studies have shown that reducing LDL cholesterol levels by about 38.7 mg/dL (1 mmol/L) with the use of statins can reduce cardiovascular disease and stroke risk by about 21%. Studies have also found that statins reduce atheroma progression. As a result, people with a history of cardiovascular disease may derive benefit from statins irrespective of their cholesterol levels (total cholesterol below 5.0 mmol/L [193 mg/dL]), and in men without cardiovascular disease, there is benefit from lowering abnormally high cholesterol levels ("primary prevention"). Primary prevention in women was originally practiced only by extension of the findings in studies on men, since, in women, none of the large statin trials conducted prior to 2007 demonstrated a significant reduction in overall mortality or in cardiovascular endpoints. Meta-analyses have since demonstrated significant reductions in all-cause and cardiovascular mortality, without significant heterogeneity by sex. The 1987 report of the National Cholesterol Education Program's Adult Treatment Panel suggests that total blood cholesterol levels be classified as follows: below 200 mg/dL, normal; 200–239 mg/dL, borderline-high; 240 mg/dL and above, high. The American Heart Association provides a similar set of guidelines relating total (fasting) blood cholesterol levels to risk for heart disease. Statins are effective in lowering LDL cholesterol and are widely used for primary prevention in people at high risk of cardiovascular disease, as well as in secondary prevention for those who have developed cardiovascular disease. The global mean total cholesterol for humans has remained at about 4.6 mmol/L (178 mg/dL) for men and women, both crude and age-standardized, for nearly 40 years from 1980 to 2018, with some regional variations and a reduction of total cholesterol in Western nations.
More recent testing methods determine LDL ("bad") and HDL ("good") cholesterol separately, allowing cholesterol analysis to be more nuanced. The desirable LDL level is considered to be less than 100 mg/dL (2.6 mmol/L). Total cholesterol is defined as the sum of HDL, LDL, and VLDL. Usually, only the total, HDL, and triglycerides are measured. For cost reasons, the VLDL is usually estimated as one-fifth of the triglycerides, and the LDL is then estimated using the Friedewald formula (or a variant): estimated LDL = [total cholesterol] − [total HDL] − [estimated VLDL]. Because the estimated VLDL and LDL become increasingly inaccurate at high triglyceride levels, direct LDL measurement is used when triglycerides exceed 400 mg/dL. In the Framingham Heart Study, each 10 mg/dL (0.26 mmol/L) increase in total cholesterol levels increased 30-year overall mortality by 5% and CVD mortality by 9%. Subjects over the age of 50, however, had an 11% increase in overall mortality and a 14% increase in cardiovascular disease mortality per 1 mg/dL (0.026 mmol/L) drop in total cholesterol levels per year. The researchers attributed this phenomenon to reverse causation: the disease itself increases the risk of death and also changes a myriad of factors, such as weight loss and the inability to eat, which lower serum cholesterol. This effect was also shown in men of all ages and women over 50 in the Vorarlberg Health Monitoring and Promotion Programme. These groups were more likely to die of cancer, liver diseases, and mental diseases with very low total cholesterol, of 186 mg/dL (4.8 mmol/L) and lower. This result indicates the low-cholesterol effect occurs even among younger respondents, contradicting the previous assessment among cohorts of older people that this is a marker for frailty occurring with age. Abnormally low levels of cholesterol are termed hypocholesterolemia. Research into the causes of this state is relatively limited, but some studies suggest a link with depression, cancer, and cerebral hemorrhage. In general, the low cholesterol levels seem to be a consequence, rather than a cause, of an underlying illness. A genetic defect in cholesterol synthesis causes Smith–Lemli–Opitz syndrome, which is often associated with low plasma cholesterol levels. Hyperthyroidism, or any other endocrine disturbance that causes upregulation of the LDL receptor, may result in hypocholesterolemia. The American Heart Association recommends testing cholesterol every four to six years for people aged 20 years or older. A separate set of American Heart Association guidelines issued in 2013 indicates that people taking statin medications should have their cholesterol tested 4–12 weeks after their first dose and then every 3–12 months thereafter. For men ages 45 to 65 and women ages 55 to 65, a cholesterol test should be performed every one to two years, and an annual test should be performed for seniors over the age of 65. After 12 hours of fasting, a blood sample is taken by a healthcare professional from an arm vein to measure a lipid profile for a) total cholesterol, b) HDL cholesterol, c) LDL cholesterol, and d) triglycerides. Results may be expressed as "calculated", indicating that the LDL value has been derived from the measured total cholesterol, HDL, and triglycerides rather than measured directly.
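As an illustration of the estimation just described, the following short sketch (in Python; the function names and the 38.67 mg/dL-per-mmol/L conversion factor, derived from cholesterol's molar mass of roughly 386.7 g/mol, are this example's own choices, not part of any clinical standard library) applies the Friedewald formula and converts the result between the two unit systems used in this article.

```python
# Sketch of the lipid-panel arithmetic described above. Illustrative only:
# the names and the conversion factor are this example's own choices.

MGDL_PER_MMOL = 38.67  # mg/dL per mmol/L for cholesterol (molar mass ~386.7 g/mol)

def estimate_ldl_friedewald(total_chol: float, hdl: float, triglycerides: float) -> float:
    """Estimate LDL in mg/dL from the other measured values (all mg/dL).

    VLDL is approximated as one-fifth of triglycerides, so
    LDL = total cholesterol - HDL - triglycerides / 5.
    The estimate is unreliable above 400 mg/dL of triglycerides,
    where a direct LDL measurement is used instead.
    """
    if triglycerides > 400:
        raise ValueError("Friedewald estimate unreliable; measure LDL directly")
    return total_chol - hdl - triglycerides / 5

def mgdl_to_mmol(value_mgdl: float) -> float:
    """Convert a cholesterol concentration from mg/dL to mmol/L."""
    return value_mgdl / MGDL_PER_MMOL

ldl = estimate_ldl_friedewald(200, 50, 150)  # 200 - 50 - 30 = 120 mg/dL
print(f"estimated LDL: {ldl:.0f} mg/dL ({mgdl_to_mmol(ldl):.1f} mmol/L)")
```

With a total cholesterol of 200, an HDL of 50, and triglycerides of 150 (all mg/dL), the estimated VLDL is 30 and the estimated LDL is 120 mg/dL, about 3.1 mmol/L.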
Cholesterol levels are considered "normal" or "desirable" if a person has a total cholesterol of 5.2 mmol/L (200 mg/dL) or less, an HDL value of more than 1 mmol/L (40 mg/dL; "the higher, the better"), an LDL value of less than 2.6 mmol/L (100 mg/dL), and a triglyceride level of less than 1.7 mmol/L (150 mg/dL). Blood cholesterol in people with lifestyle, aging, or cardiovascular risk factors, such as diabetes mellitus, hypertension, family history of coronary artery disease, or angina, is evaluated against different thresholds. Cholesteric liquid crystals Some cholesterol derivatives (among other simple cholesteric lipids) are known to generate the cholesteric liquid crystalline phase. The cholesteric phase is, in fact, a chiral nematic phase, and it changes color when its temperature changes. This makes cholesterol derivatives useful for indicating temperature in liquid-crystal display thermometers and in temperature-sensitive paints. Stereoisomers Cholesterol has 256 stereoisomers that arise from its eight stereocenters (two possible configurations at each of eight centers). Only two of the stereoisomers have biochemical significance: nat-cholesterol and ent-cholesterol (for natural and enantiomer, respectively). The only cholesterol stereoisomer to occur naturally is nat-cholesterol. |
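The stereoisomer count follows from simple combinatorics: two possible configurations at each of eight independent stereocenters give 2^8 = 256 arrangements. The following sketch merely enumerates these abstract R/S labels to verify the count; it is an illustration of the counting argument only and makes no claim about the molecule's actual three-dimensional chemistry.

```python
# Illustrative combinatorics only: each of cholesterol's eight
# stereocenters can take one of two configurations (R or S), so there
# are 2**8 = 256 possible stereoisomers. This enumerates the abstract
# configuration labels, nothing more.
from itertools import product

configurations = list(product("RS", repeat=8))
print(len(configurations))          # 256
print("".join(configurations[0]))   # 'RRRRRRRR', one of the 256 labels
```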
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-13] | [TOKENS: 8626] |
Contents Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as a part of Big Tech, which refers to the six largest tech companies in the United States – Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia – which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation yet for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Mobility and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock set a new record for the trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline.
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. S&P Dow Jones Indices added Facebook to the S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced the Microsoft Windows client of its gaming service Facebook Gameroom, formerly Facebook Games Arcade, at the Unity Technologies developers conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook, similar to TikTok, that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would be shutting down on July 10. In 2018, the Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He also urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" in a way that would prevent competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. Later, it was reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to contribute $10 million each to fund the launch of the cryptocurrency coin named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from Swiss government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news of Facebook's plan to rebrand the company and change its name started to emerge on October 21, 2021.
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the pivoting efforts toward building the metaverse – without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. Based on Facebook's PR campaign, the name change reflects the company's shifting long-term focus of building the metaverse, a digital extension of the physical world by social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, that it would therefore transfer its rights to the name to Meta Platforms, and that the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertisement revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to reports published by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Facebook and Instagram (though not Meta-owned WhatsApp) were banned in Russia, and Meta was added to the Russian list of terrorist and extremist organizations, for alleged Russophobia and hate speech (including alleged calls for genocide) amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smartglasses which could play music and take pictures.
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline, when its total revenue slipped by 1% to $28.8bn. Analysts and journalists attributed the loss to its advertising business, which has been limited by Apple's app tracking transparency feature and the number of people who have opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite reaching the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gifted the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive, which aims to make such data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record EUR 1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after the company announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled out of Meta's generative AI group after it was set up in February. Meta would not charge for access or usage but would instead operate with an open-source model, allowing Meta to ascertain what improvements need to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use; an earlier version of Llama had been released only to academics. In August 2023, Meta announced its permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platform. The Online News Act was in effect by year-end, but Meta did not participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscriber model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On May 16, 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing false copyright violations despite his content being original and free from copyrighted material. He discovered that extortionists were behind these takedowns, offering to restore his content for $3,000 or provide ongoing protection for $1,000 per month. This scam, exploiting Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures like Ammar al-Hakim. This situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On September 16, 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses to be used internally. On October 4, 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products by the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10bn global underwater cable spanning 25,000 miles. In the same month, Meta closed down two million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running ads about financial services in Australia to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage, impacting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception allowing users to call LGBTQ people mentally ill on the basis of being gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta Platforms Inc. decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would be laying off 600 employees in its artificial intelligence unit in an effort to operate more effectively with a simpler structure. The company referred to its AI unit as "bloated" and sought to trim down the department. The layoffs were expected to impact Meta's AI infrastructure units, its Fundamental Artificial Intelligence Research unit (FAIR) and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy the mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock, which released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded to Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion, and the deal was closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Facebook hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's return to the presidency in 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to repudiate a former employee's account of workplace dynamics. The publisher, Macmillan, reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts related to LGBTQ, reproductive health, and abortion information pages on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives – partnering with third-party fact-checkers and publicly flagging fake news – were regularly ineffective, and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue disseminating a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision received criticism from fact-checking institutions, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of the wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argue were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that were historically important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc. and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for significant and persistent breaches of privacy rules in connection with the Cambridge Analytica affair. Every violation of the Privacy Act carries a theoretical cumulative liability of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission and 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook as an antitrust lawsuit against Facebook. The lawsuit concerned Facebook's acquisition of two competitors—Instagram and WhatsApp—and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual of an internet in which the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this had harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with confusing reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company refused to take responsibility.
The company noted that tools have been developed for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. In addition, the company is providing resources specific to eating disorders, as well as developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using the advertising algorithm tool at issue. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs are seeking approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit, in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021, could proceed. In April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users to either allow their personal data to be used to target advertisements or pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta moved the moderation service to the Ghanaian capital of Accra after legal issues in the previous location, Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest the conditions there are worse than in the previous Kenyan location, with many workers afraid of speaking out due to fear of returning to conflict zones. Workers reported developing mental illnesses, attempted suicides, and low pay. On January 26, 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google that prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioning of ad placements and the potential engagement levels of the advertisement itself. Similar to other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience, namely targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing, and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland would be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the Double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have access to the Internet. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London, with another 18 years reportedly left on the lease. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Ex-Facebook employee Frances Haugen, the whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-Alexander2019_379-1] | [TOKENS: 12858] |
Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. Minecraft was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over the game's development following its full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world (a simplified sketch of such a grid appears below). Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
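The block world described above is, at its core, a three-dimensional grid of cell values. The following Java sketch is purely illustrative (Minecraft's real storage divides the world into chunks with palette compression, and none of these names are Mojang's); it shows how mining and placing reduce to reads and writes of grid cells:

    // Illustrative voxel grid: the world as a 3D array of block IDs.
    // Real Minecraft storage is chunked and palette-compressed; this is a toy model.
    public class VoxelGrid {
        private final byte[][][] blocks; // 0 = air; other values = block types

        public VoxelGrid(int sizeX, int sizeY, int sizeZ) {
            blocks = new byte[sizeX][sizeY][sizeZ]; // all cells start as air
        }

        public byte get(int x, int y, int z) {
            return blocks[x][y][z];
        }

        // "Mining" writes air (0) back into a cell; "placing" writes a block ID.
        public void set(int x, int y, int z, byte blockId) {
            blocks[x][y][z] = blockId;
        }
    }

Under this model, gravity-immune blocks are simply cells whose values never change unless a player edits them, which matches the behavior described above.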
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes (a worked example follows below). The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin from nine possibilities, including Steve and Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentional and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins have a bartering system, in which players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
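The 20-minute cycle mentioned above maps neatly onto the game's fixed simulation rate. As a worked example, the commonly cited values are 20 game ticks per second and 24,000 ticks per full day; treat both constants as illustrative rather than as a quotation of Mojang's code:

    // Worked arithmetic for the day/night cycle: 24,000 ticks at 20 ticks/second
    // comes to 1,200 real seconds, i.e. the 20 real-time minutes cited above.
    public class DayNightCycle {
        static final int TICKS_PER_SECOND = 20;  // commonly cited tick rate
        static final int TICKS_PER_DAY = 24_000; // one full day/night cycle

        public static void main(String[] args) {
            int seconds = TICKS_PER_DAY / TICKS_PER_SECOND; // 1,200 s
            System.out.println("One cycle = " + seconds / 60 + " real minutes"); // 20
        }
    }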
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem. The poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past, is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty (these rules are sketched in the example below). Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn after 5 minutes. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as spectators after dying. Adventure mode was added to the game in a post-launch update and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
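The survival rules above (starvation drains health, regeneration requires a full hunger bar or peaceful difficulty, and death returns the player to a spawn point) can be summarized as a small state machine. In this hedged Java sketch, the rates and thresholds are chosen for illustration only and do not reproduce the game's actual tuning:

    // Toy survival update: illustrative thresholds, not Mojang's actual values.
    public class SurvivalState {
        static final int MAX = 20; // both bars conventionally hold 20 half-units

        int health = MAX;
        int hunger = MAX;
        boolean peaceful = false;

        void update() {
            if (hunger == 0 && !peaceful) {
                health--;                 // an empty hunger bar starves the player
            } else if ((hunger == MAX || peaceful) && health < MAX) {
                health++;                 // regeneration needs a full bar (or peaceful)
            }
            if (health <= 0) {
                respawn();                // inventory items would be dropped here
            }
        }

        void respawn() {
            health = MAX;
            hunger = MAX;                 // back at the spawn point
        }
    }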
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server, as sketched in the example below. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run multiplayer games easily and safely without having to set up their own server. Unlike a standard server, only invited players can join a Realms server, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners on Bedrock Edition can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, alongside support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
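The operator-side restrictions described above amount to two checks when a player tries to join: is the address banned, and is the username allowed. A minimal Java sketch follows; the class and method names are hypothetical, not the real server's API:

    // Hypothetical join check combining an IP ban list with a username whitelist.
    import java.util.Set;

    public class JoinPolicy {
        private final Set<String> allowedNames; // empty set = whitelist disabled
        private final Set<String> bannedIps;

        public JoinPolicy(Set<String> allowedNames, Set<String> bannedIps) {
            this.allowedNames = allowedNames;
            this.bannedIps = bannedIps;
        }

        public boolean mayJoin(String username, String ip) {
            if (bannedIps.contains(ip)) return false; // bans take precedence
            return allowedNames.isEmpty() || allowedNames.contains(username);
        }
    }

For example, new JoinPolicy(Set.of("alice"), Set.of("203.0.113.7")).mayJoin("alice", "198.51.100.2") returns true, while any request from the banned address is refused regardless of username.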
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation. The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013 and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement saying that "the code would not be run or read by the game itself" and that the malware would run only when the image containing the skin was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's style, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft closed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions received major updates, usually annually—free to players who had purchased the game—each primarily centered on a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs. It cannot be enabled by default with any texture pack on any world. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; on 13 May, Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions but received updates to bring it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in autumn 2018. It was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. The Windows version of Bedrock Edition is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release implemented new features such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version automatically gained access to the other. Both game versions otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for the Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning the process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated: "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. Rosenfeld composed the music in Ableton Live, along with several additional plug-ins. Speaking of the plug-ins, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included music added in the game's 2013 "Music Update". A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate-color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the minigames of the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then was longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has not been released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying that they made the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for worlds 36 times larger than the PlayStation 3 edition's and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when it broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories of Best Debut Game, Best Downloadable Game and Innovation, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games to be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit, which opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category, and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2021. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award for PC and console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunset. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped; after the first vote this was changed, and losing mobs would have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to the full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering with Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (it ranks as the country with the 30th-smallest elevation span), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition, with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines, such as a hard drive and an 8-bit computer (see the sketch below). Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
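The redstone machines mentioned above are possible because redstone signals behave like on/off wires, from which logic gates, then adders, and eventually whole 8-bit computers can be composed. The boolean Java sketch below shows the usual first steps; it is illustrative only (in-game, a NOT gate is typically built from a redstone torch):

    // Redstone-style logic as booleans: compose NOT/AND/OR into a half adder.
    public class RedstoneLogic {
        static boolean not(boolean a) { return !a; }        // a torch inverts its input
        static boolean and(boolean a, boolean b) { return a && b; }
        static boolean or(boolean a, boolean b) { return a || b; }

        // Half adder: sum = XOR (built from OR/AND/NOT), carry = AND.
        static boolean[] halfAdder(boolean a, boolean b) {
            boolean sum = and(or(a, b), not(and(a, b)));
            boolean carry = and(a, b);
            return new boolean[] { sum, carry };
        }

        public static void main(String[] args) {
            boolean[] r = halfAdder(true, true);
            System.out.println("sum=" + r[0] + ", carry=" + r[1]); // sum=false, carry=true
        }
    }

Chaining half adders into full adders yields the arithmetic unit of the 8-bit machines described above.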
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for having various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). The fears of fans proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright claiming service. The DMCA notice was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dickinsonia] | [TOKENS: 1901] |
Contents Dickinsonia Dickinsonia is a genus of extinct organism that lived during the late Ediacaran period in what is now Australia, China, Russia, and Ukraine. It had a round, approximately bilaterally symmetric body with multiple segments running along it. It could range from a few millimetres to over a metre in length, and likely lived in shallow waters, feeding on the microbial mats that dominated the seascape at the time. As a member of the Ediacaran biota, its relationships to other organisms have been heavily debated. It was initially proposed to be a jellyfish, and over the years has been claimed to be a land-dwelling lichen, a placozoan, or even a giant protist. Currently, the most popular interpretation is that it was a seafloor-dwelling animal, perhaps a primitive stem-group bilaterian, although this is still contentious. Among other Ediacaran organisms, it shares a close resemblance to other segmented forms like Vendia, Yorgia, and Spriggina, and has been proposed to be a member of the phylum Proarticulata or alternatively the morphogroup Dickinsoniomorpha. It is disputed whether the segments of Dickinsonia are bilaterally symmetric across the midline, or are offset from each other via glide reflection, or possibly both. Since the description of Dickinsonia costata in 1947 by Reginald Sprigg, eight other species have been proposed, although only two others—Dickinsonia tenuis and Dickinsonia menneri—are widely considered valid. Description Dickinsonia fossils are known only in the form of imprints and casts in sandstone beds. The specimens found range from a few millimetres to about 1.4 metres (4 ft 7 in) in length, and from a fraction of a millimetre to a few millimetres thick. They are nearly bilaterally symmetric, segmented, round or oval in outline, slightly expanded to one end (i.e. egg-shaped in outline). The rib-like segments are radially inclined towards the wide and narrow ends, and the width and length of the segments increases towards the wide end of the fossil. The body is divided into two by a midline ridge or groove, except for a single unpaired segment at one end, dubbed the "anterior-most unit", suggested to represent the front of the organism. It is disputed whether the segments are offset from each other following glide reflection, and are thus isomers, or whether the segments are symmetric across the midline, and thus follow true bilateral symmetry, as the specimens displaying the offset may be the result of taphonomic distortion. Dickinsonia could perhaps have had both at the same time, with one side of the organism being glide-reflected and the other having true symmetry. The body of Dickinsonia is suggested to have been sack-like, with the outer layer being made of a resistant but unmineralised material. Some specimens from Russia show the presence of branched internal structures. Some authors have suggested that the underside of the body bore cilia, as well as infolded pockets. Dickinsonia is suggested to have lacked a mouth, anus and gut tract. Dickinsonia is suggested to have grown by adding a new pair of segments/isomers at the end opposite the unpaired "anterior-most unit". Dickinsonia probably exhibited indeterminate growth (having no maximum size), though it is suggested that the addition of new segments slowed down later in growth. Deformed specimens from Russia indicate that individuals of Dickinsonia could regenerate after being damaged.
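The disputed symmetry readings above can be made concrete as two simple coordinate transformations. The sketch below is purely illustrative (hypothetical coordinates, not fossil measurements): bilateral symmetry mirrors each left segment straight across the midline, while glide reflection mirrors and then shifts by half the segment spacing, so the segments alternate.

    # Contrast of the two symmetry hypotheses for Dickinsonia's segments.
    # Coordinates are hypothetical illustrations, not measured values.
    spacing = 1.0                                    # distance between adjacent segments
    left = [(-1.0, i * spacing) for i in range(5)]   # left-side segment positions

    # Bilateral symmetry: each right segment sits directly opposite its partner.
    bilateral_right = [(1.0, y) for (_, y) in left]

    # Glide reflection: mirror across the midline, then shift half a spacing,
    # so left and right segments alternate (the "isomer" arrangement).
    glide_right = [(1.0, y + spacing / 2) for (_, y) in left]

    print(bilateral_right[:2])   # [(1.0, 0.0), (1.0, 1.0)]
    print(glide_right[:2])       # [(1.0, 0.5), (1.0, 1.5)]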
Ecology Dickinsonia is suggested to have been a mobile marine organism that lived on the seafloor and fed by consuming microbial mats growing on the seabed, using structures present on its underside. Dickinsonia-shaped trace fossils, presumed to represent feeding impressions and sometimes found in chains, demonstrate this behaviour. These trace fossils have been assigned to the genus Epibaion. A 2022 study suggested that Dickinsonia temporarily adhered itself to the seafloor by the use of mucus, which may have been an adaptation to living in very shallow water environments. Taphonomy Dickinsonia fossils are preserved as negative impressions on the bases of sandstone beds. Such fossils are imprints of the upper sides of the benthic organisms that have been buried under the sand. The imprints formed as a result of cementation of the sand before complete decomposition of the body. The mechanism of cementation is not quite clear; among many possibilities, the process could have arisen from conditions which gave rise to pyrite "death masks" on the decaying body, or perhaps it was due to the carbonate cementation of the sand. The imprints of the bodies of organisms are often strongly compressed, distorted, and sometimes partly extend into the overlying rock. These deformations appear to show attempts by the organisms to escape from the falling sediment. Rarely, Dickinsonia have been preserved as casts in massive sandstone lenses, where they occur together with Pteridinium, Rangea and some others. Large beds containing many hundreds of Dickinsonia (along with many other species) are preserved in situ within Nilpena Ediacara National Park, with park rangers providing on-site guided tours in the cooler months of the year. These specimens are the products of events in which organisms were first stripped from the sea-floor, then transported and deposited within a sand flow. In such cases, stretched and ripped Dickinsonia occur. The first such specimen was described as a separate genus and species, Chondroplon bilobatum, and was later re-identified as Dickinsonia. Taxonomy Dickinsonia was first discovered in 1946 at the Ediacara Member of the Rawnsley Quartzite, Flinders Ranges in South Australia. Reg Sprigg described Dickinsonia the following year and named it after Ben Dickinson, then Director of Mines for South Australia, and head of the government department that employed Sprigg. Additional specimens of Dickinsonia have also been found in the Mogilev Formation in the Dniester River Basin of Ukraine, the White Sea in Russia, and the Dengying Formation (c. 551–543 Ma) in the Yangtze Gorges area, South China. Sprigg's initial interpretation was that Dickinsonia was a jellyfish-like organism from the early Cambrian. He suspected that the imprint left behind was a cast of the flattened bell, and that the grooves radiating from the center were possibly some sort of canal system or rigid structure. Further analysis in 1949 theorized that the bilateral nature of Dickinsonia could have been a sign of higher complexity, but Sprigg was unwilling to firmly classify it into any taxon. In 1955, Harrington and Moore published their own classification of Dickinsonia, assigning it to class Dipleurozoa, order Dickinsoniida, and family Dickinsoniidae in the now-defunct group Coelenterata. After the discovery of the indisputably Precambrian Charnia in 1958, the existence of Proterozoic life became more widely accepted among paleontologists.
This discovery led Dickinsonia and other South Australian organisms to be properly recognized as Precambrian in age. The segmentation of the recently discovered Spriggina from the same locality led it and the similarly segmented Dickinsonia to be classified as annelids, which remained the leading hypothesis for the next few decades, albeit with reservations. In 1985, following studies that concluded that Dickinsonia and related taxa had glide symmetry rather than bilateral symmetry, a new phylum, Proarticulata, was erected to include the Ediacaran organisms that were assumed to have glide reflection, which included Spriggina, Vendia, and several others. Their relationships to other organisms remain uncertain and numerous hypotheses have been offered since. Adolf Seilacher proposed that most Ediacaran organisms were closely related to each other, as part of the grouping "Vendobionta", though recent authors argue that this grouping is likely polyphyletic. Some authors do not use Proarticulata and instead use the clade Dickinsoniomorpha. In 2013 Gregory Retallack proposed that Dickinsonia and other Ediacaran lifeforms were lichens, arguing that their preservation methods were similar. This has been broadly rejected by most authors, who argue that a marine environment better fits the available evidence. Other proposals have included giant protists, placozoans, or cnidarians. While Dickinsonia's relationships to other organisms are still highly contentious, most biologists consider an animal with stem-bilaterian affinity to be the most likely interpretation. In 2018 it was found that many Russian specimens contained cholesterol, which is only produced by animals, supporting an animal affinity. These results have been questioned by other authors, however, who consider the association between the cholesterol and the Dickinsonia fossils not to be definitive. The predictable growth patterns, clear left and right sides, and an anterior–posterior axis all suggest that Dickinsonia was a bilaterian. However, most modern bilaterians have a mouth and anus connected by a gut, none of which have been found in Dickinsonia. This almost certainly rules out Dickinsonia being a crown-bilaterian, but it could still have been a stem-bilaterian. Since 1947, a total of nine species have been described, three of which (D. costata, D. tenuis, and D. menneri) are currently considered valid.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Encyclopedia_of_Islam] | [TOKENS: 1154] |
Contents Encyclopaedia of Islam The Encyclopaedia of Islam (also French: Encyclopédie de l'Islam and German: Enzyklopädie des Islām; EI is a common abbreviation in all three languages) is a reference work that facilitates the academic study of Islam. It is published by Brill and provides information on various aspects of Islam and the Islamic world. It is considered to be the standard reference work in the field of Islamic studies. The first edition was published in 1913–1938, the second in 1954–2005, and the third was begun in 2007. Content According to Brill, the EI includes "articles on distinguished Muslims of every age and land, on tribes and dynasties, on the crafts and sciences, on political and religious institutions, on the geography, ethnography, flora and fauna of the various countries and on the history, topography and monuments of the major towns and cities. In its geographical and historical scope it encompasses the old Arabo-Islamic empire, the Islamic countries of Iran, Central Asia, the Indian sub-continent and Indonesia, the Ottoman Empire and all other Islamic countries". Reception EI is considered to be the standard reference work in the field of Islamic studies. Each article was written by a recognized specialist on the relevant topic. The most important, authoritative reference work in English on Islam and Islamic subjects. Includes long, signed articles, with bibliographies. Special emphasis is given in this (EI2) edition to economic and social topics, but it remains the standard encyclopedic reference on the Islamic religion in English. — Librarian Suzanne K. Lorimer, Yale University Library The most important and comprehensive reference tool for Islamic studies is the Encyclopaedia of Islam, an immense effort to deal with every aspect of Islamic civilization, conceived in the widest sense, from its origins down to the present day... EI is no anonymous digest of received wisdom. Most of the articles are signed, and while some are hardly more than dictionary entries, others are true research pieces – in many cases the best available treatment of their subject. — Historian R. Stephen Humphreys This reference work is of fundamental importance on topics dealing with the geography, ethnography and biography of Muslim peoples. — Iranologist Elton L. Daniel Historian Richard Eaton criticised the Encyclopaedia of Islam in the book India's Islamic Traditions, 711–1750, published in 2003. He writes that in attempting to describe and define Islam, the project subscribes to the Orientalist, monolithic notion that Islam is a "bounded, self-contained entity". Editions The first edition (EI1) was modeled on the Pauly-Wissowa Realencyclopädie der classischen Altertumswissenschaft. EI1 was created under the aegis of the International Union of Academies, and coordinated by Leiden University. It was published by Brill in four volumes plus supplement from 1913 to 1938 in English, German, and French editions. An abridged version was published in 1953 as the Shorter Encyclopaedia of Islam (SEI), covering mainly law and religion. Excerpts of the SEI have been translated and published in Turkish, Arabic, and Urdu. The second edition of Encyclopaedia of Islam (EI2) was begun in 1954 and completed in 2005 (several indexes to be published until 2007); it is published by the Dutch academic publisher Brill and is available in English and French. Since 1999, (EI2) has been available in electronic form, in both CD-ROM and web-accessible versions.
Besides a great expansion in content, the second edition of EI differs from the first mainly in incorporating the work of scholars of Muslim and Middle Eastern background among its many hundreds of contributors: EI1 and SEI were produced almost entirely by European scholars, and they represent a specifically European interpretation of Islamic civilization. The point is not that this interpretation is "wrong", but that the questions addressed in these volumes often differ sharply from those which Muslims have traditionally asked about themselves. EI2 is a somewhat different matter. It began in much the same way as its predecessor, but a growing proportion of the articles now come from scholars of Muslim background. The persons do not represent the traditional learning of Qom and al-Azhar, to be sure; they have been trained in Western-style universities, and they share the methodology if not always the cultural values and attitudes of their Western colleagues. Even so, the change in tone is perceptible and significant. — R. Stephen Humphreys Publication of the Third Edition of EI (EI3) started in 2007. It is available online, with printed "Parts" appearing four times per year. The editorial team consists of twenty 'Sectional Editors' and five 'Executive Editors' (i.e. editors-in-chief). The Executive Editors are Kate Fleet, Gudrun Krämer (Free University, Berlin), Everett Rowson (New York University), John Nawas (Catholic University of Leuven), and Denis Matringe (EHESS, CNRS). The scope of EI3 includes comprehensive coverage of Islam in the twentieth century; expansion of geographical focus to include all areas where Islam has been or is a prominent or dominant aspect of society; attention to Muslim minorities all over the world; and full attention to social science as well as humanistic perspectives. Translation The encyclopaedia has been translated into Urdu in 23 volumes as Urdu Daira Maarif Islamiya, published by the University of the Punjab.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Zombie_comedy] | [TOKENS: 410] |
Contents Zombie comedy Zombie comedy, often called zom com or zomedy, is a film genre that aims to blend zombie horror motifs with slapstick comedy as well as morbid humor. History The earliest roots of the genre can be found in Jean Yarbrough's King of the Zombies (1941) and Gordon Douglas's Zombies on Broadway (1945), though both of these films dealt with Haitian-style zombies. While not comedies, George A. Romero's Dawn of the Dead (1978) and Day of the Dead (1985) featured several comedic scenes and satirical commentary on society. An American Werewolf in London (1981) and the Return of the Living Dead series (beginning 1985), especially the first two films and the last of the series, can be considered some of the earliest examples of zombie comedy using the modern zombie. Other early examples include Mr. Vampire (1985), C.H.U.D. II: Bud the C.H.U.D. (1989), Braindead (1992), and Bio Zombie (1998). A popular modern zombie comedy is Edgar Wright's Shaun of the Dead (2004), a self-dubbed romantic zombie comedy, or RomZomCom, with many in-jokes and references to George A. Romero's earlier Dead films, especially Dawn of the Dead. Other popular zombie comedies include Gregg Bishop's Dance of the Dead (2008) and the 2009 film Zombieland. Andrew Currie's Fido, Matthew Leutwyler's Dead & Breakfast, and Peter Jackson's Braindead are also examples of zombie comedies. Sam Raimi's Evil Dead II, although a more direct horror film, contains some lighthearted and dark comedy elements, and its sequel, Army of Darkness, is even more comedic. However, the Evil Dead franchise features evil spirits that possess dead and living bodies and even objects, rather than traditional-style zombies.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Voyager_2] | [TOKENS: 4971] |
Contents Voyager 2 Voyager 2 is a space probe launched by NASA on August 20, 1977, as a part of the Voyager program. It was launched on a trajectory towards the gas giants (Jupiter and Saturn) and enabled further encounters with the ice giants (Uranus and Neptune). The only spacecraft to have visited either of the ice giant planets, it was the third of five spacecraft to achieve Solar escape velocity, which allowed it to leave the Solar System. Launched 16 days before its twin Voyager 1, the primary mission of the spacecraft was to study the outer planets, and its extended mission is to study interstellar space beyond the Sun's heliosphere. Voyager 2 successfully fulfilled its primary mission of visiting the Jovian system in 1979, the Saturnian system in 1981, the Uranian system in 1986, and the Neptunian system in 1989. The spacecraft is currently in its extended mission of studying the interstellar medium. It is at a distance of 142.60 AU (21.3 billion km; 13.3 billion mi) from Earth as of January 2026. The probe entered the interstellar medium on November 5, 2018, at a distance of 119.7 AU (11.1 billion mi; 17.9 billion km) from the Sun and moving at a velocity of 15.341 km/s (34,320 mph) relative to the Sun. Voyager 2 has left the Sun's heliosphere and is traveling through the interstellar medium, though still inside the Solar System, joining Voyager 1, which reached the interstellar medium in 2012. Voyager 2 has begun to provide the first direct measurements of the density and temperature of the interstellar plasma. Voyager 2 is in contact with Earth through the NASA Deep Space Network. Communications are the responsibility of Australia's DSS 43 communication antenna, near Canberra. History In the early space age, it was realized that a periodic alignment of the outer planets would occur in the late 1970s and enable a single probe to visit Jupiter, Saturn, Uranus, and Neptune by taking advantage of the then-new technique of gravity assists. NASA began work on a Grand Tour, which evolved into a massive project involving two groups of two probes each, with one group visiting Jupiter, Saturn, and Pluto and the other Jupiter, Uranus, and Neptune. The spacecraft would be designed with redundant systems to ensure survival throughout the entire tour. By 1972 the mission was scaled back and replaced with two Mariner program-derived spacecraft, the Mariner Jupiter-Saturn probes. To keep apparent lifetime program costs low, the mission would include only flybys of Jupiter and Saturn, but keep the Grand Tour option open. As the program progressed, the name was changed to Voyager. The primary mission of Voyager 1 was to explore Jupiter, Saturn, and Saturn's largest moon, Titan. Voyager 2 was also to explore Jupiter and Saturn, but on a trajectory that would have the option of continuing on to Uranus and Neptune, or being redirected to Titan as a backup for Voyager 1. Upon successful completion of Voyager 1's objectives, Voyager 2 would get a mission extension to send the probe on towards Uranus and Neptune. Titan was selected due to the interest developed after the images taken by Pioneer 11 in 1979, which indicated that the moon's atmosphere was substantial and complex. Hence Voyager 1's trajectory was designed for an optimum Titan flyby.
Constructed by the Jet Propulsion Laboratory (JPL), Voyager 2, whose bus is shaped like a decagonal prism, included 16 hydrazine thrusters, three-axis stabilization, gyroscopes and celestial referencing instruments (a Sun sensor and a Canopus star tracker) to maintain pointing of the high-gain antenna toward Earth. Collectively these instruments are part of the Attitude and Articulation Control Subsystem (AACS), along with redundant units of most instruments and eight backup thrusters. The spacecraft also included 11 scientific instruments to study celestial objects as it traveled through space. Built with eventual interstellar travel in mind, Voyager 2 included a large, 3.7 m (12 ft) parabolic, high-gain antenna (see diagram) to transceive data via the Deep Space Network on Earth. Communications are conducted over the S-band (about 13 cm wavelength) and X-band (about 3.6 cm wavelength), providing data rates as high as 115.2 kilobits per second at the distance of Jupiter, and then ever-decreasing as distance increases, because of the inverse-square law. When the spacecraft is unable to communicate with Earth, the Digital Tape Recorder (DTR) can record about 64 megabytes of data for transmission at another time. Voyager 2 is equipped with three multihundred-watt radioisotope thermoelectric generators (MHW RTGs). Each RTG includes 24 pressed plutonium oxide spheres. At launch, each RTG provided enough heat to generate approximately 157 W of electrical power. Collectively, the RTGs supplied the spacecraft with 470 watts at launch (halving every 87.7 years). They were predicted to allow operations to continue until at least 2020, and continued to provide power to five scientific instruments through the early part of 2023. In April 2023, JPL began using a reservoir of backup power intended for an onboard safety mechanism. As a result, all five instruments were expected to continue operation through 2026. In October 2024, NASA announced that the plasma science instrument had been turned off, preserving power for the remaining four instruments. Because of the energy required to achieve a Jupiter trajectory boost with an 825-kilogram (1,819 lb) payload, the spacecraft included a propulsion module made of a 1,123-kilogram (2,476 lb) solid-rocket motor and eight hydrazine monopropellant rocket engines, four providing pitch and yaw attitude control, and four for roll control. The propulsion module was jettisoned shortly after the successful Jupiter burn. Sixteen hydrazine Aerojet MR-103 thrusters on the mission module provide attitude control. Four are used to execute trajectory correction maneuvers; the others are arranged in two redundant six-thruster branches to stabilize the spacecraft on its three axes. Only one branch of attitude control thrusters is needed at any time. Thrusters are supplied by a single 70-centimeter (28 in) diameter spherical titanium tank. It contained 100 kilograms (220 lb) of hydrazine at launch, providing enough fuel to last until 2034. Mission profile The Voyager 2 probe was launched on August 20, 1977, by NASA from Space Launch Complex 41 at Cape Canaveral, Florida, aboard a Titan IIIE/Centaur launch vehicle. Sixteen days later, the twin Voyager 1 probe was launched on September 5, 1977. However, Voyager 1 reached both Jupiter and Saturn sooner, as Voyager 2 had been launched into a longer, more circular trajectory.
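The RTG figures quoted above imply a simple exponential decay law for the available electrical power. A minimal sketch, assuming only the article's numbers (470 W at the 1977 launch, halving every 87.7 years, the half-life of the plutonium-238 heat source) and ignoring thermocouple degradation, which makes the real output fall somewhat faster:

    # Idealized RTG electrical output, using only the figures quoted above:
    # 470 W total at launch (1977), halving every 87.7 years.
    # Real output declines faster because the thermocouples also degrade.
    def rtg_power(years_since_launch: float, p0: float = 470.0, half_life: float = 87.7) -> float:
        return p0 * 0.5 ** (years_since_launch / half_life)

    print(round(rtg_power(2023 - 1977)))   # ~327 W predicted for 2023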
Voyager 1's initial orbit had an aphelion of 8.9 AU (830 million mi; 1.33 billion km), just a little short of Saturn's orbit of 9.5 AU (880 million mi; 1.42 billion km), whereas Voyager 2's initial orbit had an aphelion of 6.2 AU (580 million mi; 930 million km), well short of Saturn's orbit. In April 1978, no commands were transmitted to Voyager 2 for a period of time, causing the spacecraft to switch from its primary radio receiver to its backup receiver. Sometime afterwards, the primary receiver failed altogether. The backup receiver was functional, but a failed capacitor in the receiver meant that it could only receive transmissions that were sent at a precise frequency, and this frequency would be affected by the Earth's rotation (due to the Doppler effect) and the onboard receiver's temperature, among other things. Voyager 2's closest approach to Jupiter occurred at 22:29 UT on July 9, 1979. It came within 570,000 km (350,000 mi) of the planet's cloud tops. Jupiter's Great Red Spot was revealed as a complex storm moving in a counterclockwise direction. Other smaller storms and eddies were found throughout the banded clouds. Voyager 2 returned images of Jupiter, as well as its moons Amalthea, Io, Callisto, Ganymede, and Europa. During a 10-hour "volcano watch", it confirmed Voyager 1's observations of active volcanism on the moon Io, and revealed how the moon's surface had changed in the four months since the previous visit. Together, the Voyagers observed the eruption of nine volcanoes on Io, and there is evidence that other eruptions occurred between the two Voyager fly-bys. Jupiter's moon Europa displayed a large number of intersecting linear features in the low-resolution photos from Voyager 1. At first, scientists believed the features might be deep cracks, caused by crustal rifting or tectonic processes. Closer high-resolution photos from Voyager 2, however, were puzzling: the features lacked topographic relief, and one scientist said they "might have been painted on with a felt marker". Europa is internally active due to tidal heating at a level about one-tenth that of Io. Europa is thought to have a thin crust (less than 30 km (19 mi) thick) of water ice, possibly floating on a 50 km (31 mi)-deep ocean. Two new, small satellites, Adrastea and Metis, were found orbiting just outside the ring. A third new satellite, Thebe, was discovered between the orbits of Amalthea and Io. The closest approach to Saturn occurred at 03:24:05 UT on August 26, 1981. When Voyager 2 passed behind Saturn, as viewed from Earth, it utilized its radio link to investigate Saturn's upper atmosphere, gathering data on both temperature and pressure. In the highest regions of the atmosphere, where the pressure was measured at 70 mbar (1.0 psi), Voyager 2 recorded a temperature of 82 K (−191.2 °C; −312.1 °F). Deeper within the atmosphere, where the pressure was recorded to be 1,200 mbar (17 psi), the temperature rose to 143 K (−130 °C; −202 °F). The spacecraft also observed that the north pole was approximately 10 °C (18 °F) cooler at 100 mbar (1.5 psi) than mid-latitudes, a variance potentially attributable to seasonal shifts. After its Saturn fly-by, Voyager 2's scan platform experienced an anomaly causing its azimuth actuator to seize. This malfunction led to some data loss and posed challenges for the spacecraft's continued mission.
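The severity of the fixed-frequency receiver problem described above can be estimated from the first-order Doppler relation Δf ≈ f·v/c. The numbers below are illustrative assumptions (an S-band uplink near 2.1 GHz and Earth's equatorial rotation speed), not figures from the source:

    # Rough size of the Doppler shift that Earth's rotation alone imposes
    # on an uplink signal. All values are illustrative round numbers.
    c = 3.0e5           # speed of light, km/s
    v = 0.465           # Earth's equatorial rotation speed, km/s
    f_uplink = 2.1e9    # assumed S-band uplink frequency, Hz

    delta_f = f_uplink * v / c
    print(f"{delta_f:.0f} Hz")   # ~3255 Hz

A kilohertz-scale daily swing like this is why ground controllers had to predict the shift and retune the uplink to hit the narrow band the damaged receiver could still hear.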
The anomaly was traced back to a combination of issues, including a design flaw in the actuator shaft bearing and gear lubrication system, corrosion, and debris build-up. While overuse and depleted lubricant were factors, other elements, such as dissimilar metal reactions and a lack of relief ports, compounded the problem. Engineers on the ground were able to issue a series of commands, rectifying the issue to a degree that allowed the scan platform to resume its function. Voyager 2, which would have been diverted to perform the Titan flyby had Voyager 1 been unable to, was not needed in that backup role and did not pass near Titan, subsequently proceeding with its mission to explore the Uranian system. The closest approach to Uranus occurred on January 24, 1986, when Voyager 2 came within 81,500 km (50,600 mi) of the planet's cloudtops. Voyager 2 also discovered 11 previously unknown moons: Cordelia, Ophelia, Bianca, Cressida, Desdemona, Juliet, Portia, Rosalind, Belinda, Puck and Perdita. The mission also studied the planet's unique atmosphere, shaped by its axial tilt of 97.8°, and examined the Uranian ring system. The length of a day on Uranus as measured by Voyager 2 is 17 hours, 14 minutes. Uranus was shown to have a magnetic field that was misaligned with its rotational axis, unlike other planets that had been visited to that point, and a helix-shaped magnetic tail stretching 10 million kilometers (6 million miles) away from the Sun. When Voyager 2 visited Uranus, many of its cloud features were hidden by a layer of haze; however, false-color and contrast-enhanced images show bands of concentric clouds around its south pole. This area was also found to radiate large amounts of ultraviolet light, a phenomenon called "dayglow". The average atmospheric temperature is about 60 K (−351.7 °F; −213.2 °C). The illuminated and dark poles, and most of the planet, exhibit nearly the same temperatures at the cloud tops. The Voyager 2 Planetary Radio Astronomy (PRA) experiment observed 140 lightning flashes, or Uranian electrostatic discharges (UEDs), at frequencies of 0.9–40 MHz. The UEDs were detected within 600,000 km (370,000 mi) of Uranus over 24 hours, and most were not visible. However, microphysical modeling suggests that Uranian lightning occurs in convective storms in deep tropospheric water clouds. If this is the case, the lightning would not be visible due to the thick cloud layers above the troposphere. Uranian lightning has a power of around 1×10^8 W, emits 1–2×10^7 J of energy, and lasts an average of 120 ms. Detailed images from Voyager 2's flyby of the Uranian moon Miranda showed huge canyons made from geological faults. One hypothesis suggests that Miranda might consist of a reaggregation of material following an earlier event when Miranda was shattered into pieces by a violent impact. Voyager 2 discovered two previously unknown Uranian rings. Measurements showed that the Uranian rings are different from those at Jupiter and Saturn. The Uranian ring system might be relatively young, and it did not form at the same time that Uranus did. The particles that make up the rings might be the remnants of a moon that was broken up by either a high-velocity impact or torn up by tidal effects. In March 2020, NASA astronomers reported the detection of a large atmospheric magnetic bubble, also known as a plasmoid, released into outer space from the planet Uranus, after reevaluating old data recorded during the flyby.
Following a course correction in 1987, Voyager 2's closest approach to Neptune occurred on August 25, 1989. Through repeated computerized test simulations of trajectories through the Neptunian system conducted in advance, flight controllers determined the best way to route Voyager 2 through the Neptune–Triton system. Because the plane of Triton's orbit is tilted significantly with respect to the plane of the ecliptic, Voyager 2 was directed, through course corrections, into a path about 4,950 km (3,080 mi) above the north pole of Neptune. Five hours after Voyager 2 made its closest approach to Neptune, it performed a close fly-by of Triton, Neptune's largest moon, passing within about 40,000 km (25,000 mi). In 1989, the Voyager 2 Planetary Radio Astronomy (PRA) experiment observed around 60 lightning flashes, or Neptunian electrostatic discharges, emitting energies over 7×10^8 J. The plasma wave subsystem (PWS) detected 16 electromagnetic wave events in the frequency range 50 Hz – 12 kHz at magnetic latitudes of 7°–33°. These plasma wave detections, made over 20 minutes, were possibly triggered by lightning in ammonia clouds, with the signals propagating through the magnetosphere. During Voyager 2's closest approach to Neptune, the PWS instrument provided Neptune's first plasma wave detections at a sample rate of 28,800 samples per second. The measured plasma densities ranged from 10^−3 to 10^−1 cm^−3. Voyager 2 discovered previously unknown Neptunian rings, and confirmed six new moons: Despina, Galatea, Larissa, Proteus, Naiad and Thalassa. While in the neighborhood of Neptune, Voyager 2 discovered the "Great Dark Spot", which has since disappeared, according to observations by the Hubble Space Telescope. The Great Dark Spot was later hypothesized to be a region of clear gas, forming a window in the planet's high-altitude methane cloud deck. Interstellar mission Once its planetary mission was over, Voyager 2 was described as working on an interstellar mission, which NASA is using to find out what the Solar System is like beyond the heliosphere. As of September 2023, Voyager 2 is transmitting scientific data at about 160 bits per second. Information about continuing telemetry exchanges with Voyager 2 is available from Voyager Weekly Reports. In 1992, Voyager 2 observed the nova V1974 Cygni in the far ultraviolet, the first such observation; the increase in brightness at those wavelengths allowed a more detailed study of the nova. In July 1994, an attempt was made to observe the impacts of fragments of comet Shoemaker–Levy 9 with Jupiter. The craft's position meant it had a direct line of sight to the impacts, and observations were made in the ultraviolet and radio spectra. Voyager 2 failed to detect anything, with calculations showing that the fireballs were just below the craft's limit of detection. On November 29, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer—in a random error—as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time, there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. On August 30, 2007, Voyager 2 passed the termination shock and then entered the heliosheath, approximately 1 billion mi (1.6 billion km) closer to the Sun than Voyager 1 did. This is due to the interstellar magnetic field of deep space.
The southern hemisphere of the Solar System's heliosphere is being pushed in. On April 22, 2010, Voyager 2 encountered scientific data format problems. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the problem, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit. In 2013, it was originally thought that Voyager 2 would enter interstellar space in two to three years, with its plasma spectrometer providing the first direct measurements of the density and temperature of the interstellar plasma. However, Voyager project scientist Edward C. Stone and his colleagues said they lacked evidence of what would be the key signature of interstellar space: a shift in the direction of the magnetic field. Finally, in December 2018, Stone announced that Voyager 2 had reached interstellar space on November 5, 2018. Maintenance to the Deep Space Network cut outbound contact with the probe for eight months in 2020. Contact was reestablished on November 2, when a series of instructions was transmitted, subsequently executed, and relayed back with a successful communication message. On February 12, 2021, full communications were restored after a major ground station antenna upgrade that took a year to complete. In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by Voyager 1 and Voyager 2; this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose". On July 18, 2023, Voyager 2 overtook Pioneer 10 as the second-farthest spacecraft from the Sun. On July 21, 2023, a programming error misaligned Voyager 2's high-gain antenna by 2 degrees, breaking communications with the spacecraft. By August 1, the spacecraft's carrier signal was detected using multiple antennas of the Deep Space Network. A high-power "shout" sent on August 4 from the Canberra station successfully commanded the spacecraft to reorient towards Earth, resuming communications. As a failsafe measure, the probe is also programmed to autonomously reset its orientation to point towards Earth, which would have occurred by October 15. Reductions in capabilities As the power from the RTGs slowly declines, various items of equipment have been turned off on the spacecraft. The first science instrument turned off on Voyager 2 was the photopolarimeter subsystem (PPS) in 1991, which saved 1.2 watts. Some thrusters needed to control the correct attitude of the spacecraft and to point its high-gain antenna in the direction of Earth are out of use due to clogging problems in their hydrazine injectors. The spacecraft no longer has backups available for its thruster system, and "everything onboard is running on single-string", as acknowledged by Suzanne Dodd, Voyager project manager at JPL, in an interview with Ars Technica. NASA has decided to patch the computer software in order to modify the functioning of the remaining thrusters and slow down the clogging of the small-diameter hydrazine injector jets. Before uploading the software update to the Voyager 1 computer, NASA will first try the procedure with Voyager 2, which is closer to Earth. Future of the probe The probe is expected to keep transmitting weak radio messages until at least the mid-2020s, more than 48 years after it was launched.
NASA says that "The Voyagers are destined—perhaps eternally—to wander the Milky Way." Voyager 2 is not headed toward any particular star. The nearest star is 4.2 light-years away, and at 15.341 km/s, the spacecraft travels one light-year in about 19,541 years — during which time the nearby stars will also move substantially. In roughly 42,000 years, Voyager 2 will pass the star Ross 248 (10.30 light-years away from Earth) at a distance of 1.7 light-years. If undisturbed for 296,000 years, Voyager 2 should pass by the star Sirius (8.6 light-years from Earth) at a distance of 4.3 light-years. Golden record Both Voyager space probes carry a gold-plated audio-visual disc, a compilation meant to showcase the diversity of life and culture on Earth in the event that either spacecraft is ever found by any extraterrestrial discoverer. The record, made under the direction of a team including Carl Sagan and Timothy Ferris, includes photos of the Earth and its lifeforms, a range of scientific information, spoken greetings from people such as the Secretary-General of the United Nations, and a medley, "Sounds of Earth", that includes the sounds of whales, a baby crying, waves breaking on a shore, and a collection of music spanning different cultures and eras including works by Wolfgang Amadeus Mozart, Blind Willie Johnson, Chuck Berry and Valya Balkanska. Other Eastern and Western classics are included, as well as performances of indigenous music from around the world. The record also contains greetings in 55 different languages. The project aimed to portray the richness of life on Earth and stand as a testament to human creativity and the desire to connect with the cosmos. See also Notes References Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/ParaSail_(programming_language)] | [TOKENS: 395] |
Contents ParaSail (programming language) Parallel Specification and Implementation Language (ParaSail) is an object-oriented parallel programming language. Its design and ongoing implementation are described in a blog and on its official website. ParaSail uses a pointer-free programming model, where objects can grow and shrink, and value semantics are used for assignment. It has no global garbage-collected heap. Instead, region-based memory management is used throughout. Types can be recursive, so long as the recursive components are declared optional. There are no global variables, no parameter aliasing, and all subexpressions of an expression can be evaluated in parallel. Assertions, preconditions, postconditions, class invariants, etc., are part of the standard syntax, using a Hoare-like notation. Any possible race conditions are detected at compile time. The initial design of ParaSail began in September 2009, by S. Tucker Taft. Both an interpreter using the ParaSail virtual machine and an LLVM-based ParaSail compiler are available. Work stealing is used for scheduling ParaSail's lightweight threads. The latest version can be downloaded from the ParaSail website. Description The syntax of ParaSail is similar to Modula, but with a class-and-interface-based object-oriented programming model more similar to Java or C#. More recently, the parallel constructs of ParaSail have been adapted to other syntaxes, to produce Java-like, Python-like, and Ada-like parallel languages, dubbed, respectively, Javallel, Parython, and Sparkel (named after the Ada subset SPARK on which it is based). Compilers and interpreters for these languages are included with the ParaSail implementation. Examples Among the standard examples are a "Hello, World" program, an interface to a basic map module, an implementation of that module using a binary tree, and a simple test program for the BMap module; a sketch of the Hello World program appears below.
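As a stand-in for the elided listing, here is a minimal "Hello, World" sketch in the style of ParaSail's published examples; it is reconstructed rather than quoted from the source, so the exact syntax should be checked against the official website:

    // Minimal ParaSail "Hello, World" sketch (syntax approximate).
    // The IO parameter grants the function access to the output stream.
    func Hello_World(var IO) is
        IO.Println("Hello, World");
    end func Hello_World;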
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PARI/GP] | [TOKENS: 535] |
Contents PARI/GP PARI/GP is a computer algebra system with the main aim of facilitating number theory computations. Versions 2.1.0 and higher are distributed under the GNU General Public License. It runs on most common operating systems. System overview The PARI/GP system is a package that is capable of doing formal computations on recursive types at high speed; it is primarily aimed at number theorists. Its three main strengths are its speed, the possibility of directly using data types that are familiar to mathematicians, and its extensive algebraic number theory module. The PARI/GP system consists of several standard components. Also available is gp2c, the GP-to-C compiler, which compiles GP scripts into the C language and transparently loads the resulting functions into gp. The advantage of this is that gp2c-compiled scripts will typically run three to four times faster. gp2c understands almost all of GP. PARI/GP performs arbitrary-precision calculations (e.g., the significand can be millions of digits long—and billions of digits on 64-bit machines). It can compute factorizations, perform elliptic curve computations and perform algebraic number theory calculations. It also allows computations with matrices, polynomials, power series, algebraic numbers and implements many special functions. PARI/GP comes with its own built-in graphical plotting capability. PARI/GP has some symbolic manipulation capability, e.g., multivariate polynomial and rational function handling. It also has some formal integration and differentiation capabilities. PARI/GP can be compiled with GMP (GNU Multiple Precision Arithmetic Library), providing faster computations than PARI/GP's native arbitrary-precision kernel. History PARI/GP's progenitor was a program named Isabelle, an interpreter for higher arithmetic, written in 1979 by Henri Cohen and François Dress at the Université Bordeaux 1. PARI/GP was originally developed in 1985 by a team led by Henri Cohen at Laboratoire A2X and is now maintained by Karim Belabas at the Université Bordeaux 1 with the help of many volunteer contributors. The name PARI is a pun referring to the project's early stages, when the authors started to implement a library for "Pascal ARIthmetic" in the Pascal programming language (although they quickly switched to C), and to "pari de Pascal" (Pascal's Wager). The first version of the gp calculator was originally called GPC, for Great Programmable Calculator; the trailing C was eventually dropped. Usage examples A sample gp calculator session appears below.
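The session below is an illustrative sketch rather than a listing quoted from the source; it follows gp's usual conventions, with results stored in the numbered history variables %1, %2, and so on (output layout approximate):

    ? 2^64 + 1
    %1 = 18446744073709551617
    ? factor(%1)    \\ prime factorization, returned as a two-column matrix
    %2 =
    [274177 1]
    [67280421310721 1]
    ? isprime(2^31 - 1)    \\ 2147483647 is a Mersenne prime
    %3 = 1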
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_note-Moskowitz4-105] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". It dates the use of the term to mean "'calculating machine' (of any type)" from 1897, and the "modern use" of the term, meaning 'programmable digital electronic computer', from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. Counting rods are another example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
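The slide rule works because sliding two logarithmic scales end to end adds lengths, and adding logarithms multiplies numbers: log(ab) = log a + log b. A small illustration of the principle (not an example from the article):

    # The slide-rule principle: multiplication reduces to adding logarithms.
    import math

    a, b = 3.0, 7.0
    length_sum = math.log10(a) + math.log10(b)   # add the two scale lengths
    product = 10 ** length_sum                   # read the result off the scale

    print(product)   # ~21.0 (equal to a * b up to floating-point rounding)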
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; these designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, which was designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x(y − z)^2 for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson (a sketch of its chained-integrator arrangement follows at the end of this passage). The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
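As a rough digital sketch of the chained-integrator arrangement described above, the fragment below wires two numerical integrators in series, the output of the first feeding the second, to solve y'' = −y (simple harmonic motion). The step size and step count are arbitrary choices for illustration:

    def integrate(value, rate, dt):
        # One "wheel-and-disc" integrator: accumulate a rate over a small step.
        return value + rate * dt

    dt = 0.001
    y, dy = 1.0, 0.0                  # initial conditions: y(0) = 1, y'(0) = 0
    for _ in range(3142):             # integrate out to roughly t = pi
        ddy = -y                      # the "program": feed -y back in as y''
        dy = integrate(dy, ddy, dt)   # first integrator: y'' -> y'
        y = integrate(y, dy, dt)      # second integrator: y' -> y
    print(round(y, 3))                # close to cos(pi) = -1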
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time (a sketch of how binary arithmetic reduces to simple switching logic follows at the end of this passage). The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company whose sole purpose was developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
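A small illustration of why binary suited relay machines: addition can be built entirely from the Boolean operations Shannon analyzed, each directly realizable as a switch circuit. A hedged sketch, with Python integers standing in for relay registers:

    def add_binary(a: int, b: int) -> int:
        # AND finds the bit positions that generate carries;
        # XOR adds bits without carrying; repeat until no carries remain.
        while b:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

    print(add_binary(22, 13))  # 35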
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable (a toy interpreter in this spirit is sketched at the end of this passage). The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs: changing their function required re-wiring and re-structuring the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
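A toy interpreter in the spirit of Turing's universal machine, as promised above: the rule table is data, read and executed by a fixed loop. The table here is an invented example that appends a 1 to a run of 1s; it is illustrative only, not a reconstruction of Turing's own formulation:

    def run_turing_machine(rules, tape, state='start'):
        cells = dict(enumerate(tape))      # sparse tape; '_' means blank
        pos = 0
        while state != 'halt':
            symbol = cells.get(pos, '_')
            write, move, state = rules[(state, symbol)]
            cells[pos] = write
            pos += 1 if move == 'R' else -1
        return ''.join(cells[i] for i in sorted(cells))

    rules = {
        ('start', '1'): ('1', 'R', 'start'),  # skip over the existing 1s
        ('start', '_'): ('1', 'R', 'halt'),   # write one more 1, then halt
    }
    print(run_turing_machine(rules, '111'))   # prints 1111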
The Baby was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on the work of Carl Frosch and Lincoln Derick on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. They are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, RAM, or even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include keyboards, mice, scanners, and microphones. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors, printers, and speakers. The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): read the next instruction from the address held in the program counter, decode it into control signals, execute it, update the program counter to point to the following instruction, and repeat (a toy illustration follows at the end of this passage). Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
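The fetch-decode-execute cycle and the program counter lend themselves to a toy simulation. The three-instruction machine below is invented purely for illustration (no real instruction set works exactly this way), but it shows how a jump is nothing more than overwriting the program counter:

    def run(program, acc=0):
        pc = 0                                   # the program counter
        while pc < len(program):
            op, arg = program[pc]                # fetch and decode
            pc += 1                              # assume sequential execution
            if op == 'ADD':                      # execute
                acc += arg
            elif op == 'JUMP_IF_POSITIVE':
                if acc > 0:
                    pc = arg                     # control flow: overwrite pc
            elif op == 'HALT':
                break
        return acc

    # Count down from 3 to 0: keep adding -1, jumping back while positive.
    print(run([('ADD', 3), ('ADD', -1), ('JUMP_IF_POSITIVE', 1), ('HALT', 0)]))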
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256): either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation (illustrated in the sketch at the end of this passage). Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written whenever the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer.
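The two readings of a byte mentioned above differ only in interpretation, not in the stored bits. A minimal sketch of the two's-complement reinterpretation:

    def byte_as_signed(unsigned_value: int) -> int:
        # Reinterpret an unsigned byte (0..255) as a signed one (-128..127):
        # bit patterns of 128 and above are read as negative numbers.
        assert 0 <= unsigned_value <= 255
        return unsigned_value - 256 if unsigned_value >= 128 else unsigned_value

    print(byte_as_signed(200))  # -56: the same eight bits, read as signed
    print(byte_as_signed(123))  # 123: values below 128 read identically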
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn (a toy illustration follows at the end of this passage). Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
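The time-sharing idea described above can be caricatured in a few lines. In the sketch below, Python generators play the role of programs and each yield stands in for the interrupt that hands control back to the scheduler; real operating systems preempt programs rather than relying on them to yield:

    def worker(name, steps):
        for i in range(steps):
            print(f'{name} runs step {i}')
            yield                        # "interrupt": give up the time slice

    def round_robin(tasks):
        while tasks:
            task = tasks.pop(0)          # take the next runnable program
            try:
                next(task)               # run it for one time slice
                tasks.append(task)       # still unfinished: back of the queue
            except StopIteration:
                pass                     # this program has finished

    round_robin([worker('A', 2), worker('B', 3)])  # A and B interleave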
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
An example of such a program, written in the MIPS assembly language, is sketched after this passage. Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
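The MIPS routine referred to at the start of this passage might look like the following sketch; this is a representative reconstruction (the article's original listing is not reproduced here), and the register choices are arbitrary:

    begin:
      addi $8, $0, 0        # $8 holds the running sum, initially 0
      addi $9, $0, 1        # $9 holds the counter, initially 1
    loop:
      slti $10, $9, 1001    # $10 = 1 while the counter is still <= 1000
      beq  $10, $0, finish  # once the counter passes 1000, leave the loop
      add  $8, $8, $9       # add the counter into the running sum
      addi $9, $9, 1        # advance the counter to the next number
      j    loop             # jump back: this is the loop's control flow
    finish:
      add  $2, $8, $0       # copy the result, 500500, into register $2

For contrast with the high-level languages discussed next, the same computation collapses to a single expression in a high-level language, for example sum(range(1, 1001)) in Python.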
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held video game console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Designing small programs is relatively simple and involves analyzing the problem, collecting inputs, using the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S.
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction that can apply to most digital and analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (a minimal sketch of this idea follows at the end of this section). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
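As promised above, a minimal sketch of parameters being adjusted throughout training: a single parameter w is nudged by gradient descent until the model w * x matches the targets. The tiny dataset, learning rate, and epoch count are all invented for illustration:

    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x
    w = 0.0                                      # the model's one parameter
    for _ in range(200):                         # training loop
        for x, y in data:
            error = w * x - y                    # prediction minus target
            w -= 0.01 * 2 * error * x            # step down the squared-error gradient
    print(round(w, 3))                           # close to 2.0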
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jewish_wedding] | [TOKENS: 3144] |
Contents Jewish wedding A Jewish wedding is a wedding ceremony that follows Jewish laws and traditions. While wedding ceremonies vary, common features of a Jewish wedding include a ketubah (marriage contract) that is signed by two witnesses, a chuppah or huppah (wedding canopy), a ring owned by the groom that is given to the bride under the canopy, and the breaking of a glass. Technically, the Jewish wedding process has two distinct stages: kiddushin (Hebrew for "betrothal"; sanctification or dedication, also called erusin) and nissuin (marriage), when the couple start their life together. At the first stage (kiddushin) the woman becomes prohibited to all other men, requiring a get (religious divorce) to dissolve it, while the second stage permits the couple to each other. The ceremony that accomplishes nissuin is also known as chuppah. Today, erusin/kiddushin occurs when the groom gives the bride a ring or other object of value with the intent of creating a marriage. There are differing opinions as to which part of the ceremony constitutes nissuin/chuppah, such as standing under the canopy or being alone together in a room (yichud). Erusin/kiddushin has evolved from a period in which the man prepared financially to marry his wife into the first half of the wedding ceremony. While historically these two events could take place as much as a year apart, they are now commonly combined into one ceremony. Signing of the marriage contract Before the wedding ceremony, the groom agrees to be bound by the terms of the ketubah (marriage contract) in the presence of two witnesses, whereupon the witnesses sign the ketubah. Usually these two witnesses are not closely related to the couple, but family and friends will be present for the signing. The ketubah details the obligations of the groom to the bride, among which are food, clothing, and marital relations. This document has the standing of a legally binding agreement, though it may be hard to collect these amounts in a secular court. It is often written as an illuminated manuscript that is framed and displayed in the couple's home. Under the chuppah, it is traditional to read the signed ketubah aloud, usually in the Aramaic original, but sometimes in translation. Traditionally, this is done to separate the two basic parts of the wedding. Non-Orthodox Jewish couples may opt for a bilingual ketubah, or for a shortened version to be read out. Bridal canopy A traditional Jewish wedding ceremony takes place under a chuppah (wedding canopy), symbolizing the new home being built by the couple when they become husband and wife. In Ashkenazi Jewish custom, the chuppah is usually placed outdoors under an open sky. The chuppah used in Ashkenazi ceremonies includes a cloth canopy held up by four beams. This structure is meant to represent the home of the new couple and traditionally stands under the open sky. While some Sephardic weddings will also include a chuppah of a cloth canopy and four beams, some weddings will use the tallit the groom wears as the chuppah. Once the ceremony concludes, the groom wraps the tallit around himself and his new wife, signifying their joining. Covering of the bride Prior to the ceremony, Ashkenazi Jews have a custom for the groom to cover the face of the bride (usually with a veil), and a prayer is often said for her based on the words spoken to Rebecca in Genesis 24:60. The veiling ritual is known in Yiddish as badeken.
Various reasons are given for the veil and the ceremony; a commonly accepted one is that it reminds the Jewish people of how Jacob was tricked by Laban into marrying Leah before Rachel, as her face was covered by her veil (see Vayetze). Another is that Rebecca is said to have veiled herself when approached by Isaac, who would become her husband. Sephardi Jews do not perform this ceremony. Additionally, the veil emphasizes that the groom is not solely interested in the bride's external beauty, which fades with time, but rather in her inner beauty, which she will never lose. If the couple has chosen to spend time apart leading up to the wedding day, this is the first time that they have seen each other since then. Unterfirers In many Orthodox Jewish communities, the bride is escorted to the chuppah by both mothers, and the groom is escorted by both fathers, known by Ashkenazi Jews as unterfirers (Yiddish: "Ones who lead under"). In another custom, bride and groom are each escorted by their respective parents. However, the escorts may be any happily married couple, if parents are unavailable or undesired for some reason. There is a custom in some Ashkenazi communities for the escorts to hold candles as they process to the chuppah. Circling In Ashkenazi tradition, the bride walks around the groom three or seven times when she arrives at the chuppah. This may derive from Jeremiah 31:22, "A woman shall surround a man". The three circuits may represent the three virtues of marriage: righteousness, justice and loving kindness (see Hosea 2:19). Seven circuits derives from the Biblical concept that seven denotes perfection or completeness. This has also been linked to Joshua circling the walls of Jericho seven times before they were destroyed. Sephardic Jews do not perform this ceremony. Increasingly, it is common in liberal or progressive Jewish communities (especially Reform, Reconstructionist, or Humanistic) to modify this custom for the sake of egalitarianism, or for a same-gender couple. One adaptation of this tradition is for the bride to circle the groom three times, then for the groom to circle his bride three times, and then for each to circle each other (as in a do-si-do). The symbolism of the circling has been reinterpreted to signify the centrality of one spouse to the other, or to represent the four imahot (matriarchs) and three avot (patriarchs). Presentation of the ring (Betrothal) In traditional weddings, two blessings are recited before the betrothal: a blessing over wine, and the betrothal blessing, which is specified in the Talmud. The wine is then tasted by the couple. Rings are not actually required; they are simply the most common way (since the Middle Ages) of fulfilling the bride price requirement. The bride price (or ring) must have a monetary value no less than a single prutah (the smallest denomination of currency used during the Talmudic era). The low value is to ensure that there are no financial barriers to marriage. According to Jewish law, the ring must be composed of solid metal (gold or silver are preferred; alloys are discouraged), with no jewel inlays or gem settings, so that it is easy to ascertain the ring's value. Others ascribe a more symbolic meaning, saying that the ring represents the ideal of purity and honesty in a relationship.
However, it is quite common for Jewish couples (especially those who are not Orthodox) to use wedding rings with engraving, metallic embellishments, or to go a step further and use gemstone settings. Some Orthodox couples will use a simple gold or silver band during the ceremony to fulfill the halachic obligations, and after the wedding, the bride may wear a ring with any decoration she likes. The groom gives the bride a ring, traditionally a plain wedding band, and recites the declaration: Behold, you are consecrated to me with this ring according to the law of Moses and Israel. The groom places the ring on the bride's right index finger. According to traditional Jewish law, two valid witnesses must see him place the ring. During some egalitarian weddings, the bride will also present a ring to the groom, often with a quote from the Song of Songs: "Ani l'dodi, ve dodi li" (I am my beloved's and my beloved is mine), which may also be inscribed on the ring itself. This ring is sometimes presented outside the chuppah to avoid conflicts with Jewish law. Seven blessings The wedding formally begins when the Sheva Brachot are read. The Sheva Brachot, or seven blessings, are recited by the hazzan or rabbi, or by select guests who are called up individually. Being called upon to recite one of the seven blessings is considered an honour. The groom is given the cup of wine to drink from after the seven blessings. The bride also drinks the wine. In some traditions, the cup will be held to the lips of the groom by his new father-in-law and to the lips of the bride by her new mother-in-law. Traditions vary as to whether additional songs are sung before the seven blessings. Breaking the glass After the bride has been given the ring, or at the end of the ceremony (depending on local custom), the groom breaks a glass, crushing it with his right foot. Different reasons exist for this custom. Some believe that breaking the glass is a somber moment of reflection on the destruction of the two Jewish temples. Former Sephardic Chief Rabbi of Israel Ovadia Yosef has strongly criticized the way this custom is sometimes carried out in Israel, arguing that "Many unknowledgeable people fill their mouths with laughter during the breaking of the glass, shouting 'mazel tov' and turning a beautiful custom meant to express our sorrow" over Jerusalem's destruction "into an opportunity for lightheadedness." The origin of this custom is unknown, although many reasons have been given. The primary reason is that joy must always be tempered. This is based on two accounts in the Talmud of rabbis who, upon seeing that their son's wedding celebration was getting out of hand, broke a vessel – in the second case a glass – to calm things down. Another explanation is that it is a reminder that despite the joy, Jews still mourn the destruction of the Temple in Jerusalem. Because of this, some recite the verses "If I forget thee / O Jerusalem..." (Ps. 137:5) at this point. Many other reasons have been given by traditional authorities. Reform Judaism has a newer custom in which bride and groom break the wine glass together. Yichud Yichud (togetherness or seclusion) refers to the Ashkenazi practice of leaving the bride and groom alone for 8–20 minutes after the wedding ceremony, during which the couple retreat to a private room. Yichud can take place anywhere, from a rabbi's study to a synagogue classroom.
The reason for yichud is that, according to several authorities, standing under the canopy alone does not constitute chuppah, and seclusion is necessary to complete the wedding ceremony. However, Sephardic Jews do not have this custom, as they consider it a davar mechoar (repugnant thing), compromising the couple's modesty. Today, yichud is not used to physically consummate the marriage. Instead, couples will often eat and relax together for this short period of time before the dancing and celebrations of nissuin begin. Since the wedding day is considered the bride and groom's personal Yom Kippur, they may choose to fast leading up to the wedding. The yichud can be spent as a time for the couple to break their fast and have their first meal together. Even if they did not choose to fast, it is still a secluded opportunity for the couple to spend quality time with one another before continuing on with the busyness of their wedding day. In Yemen, the Jewish practice was not for the groom and his bride to be secluded in a canopy (chuppah), as is widely practiced today in Jewish weddings, but rather in a bridal chamber that was, in effect, a highly decorated room in the house of the groom. This room was traditionally decorated with large hanging sheets of colored, patterned cloth, replete with wall cushions and short-length mattresses for reclining. Their marriage was consummated when they had been left together alone in this room. The chuppah is described the same way in Sefer HaIttur (12th century), and similarly in the Jerusalem Talmud. Wedding feast After the wedding ceremony and the yichud, the bride and groom make a grand entrance into a room filled with friends and family to begin the celebrations. The wedding ceremony is considered a serious religious event, while the wedding feast is considered a fun, lively celebration for the couple. Guests are expected, and indeed obliged, to bring joy and festivity to the couple on their wedding day. At the wedding feast, there is dancing, singing, eating, and drinking. The festivities are broken up into two parts. Towards the beginning of the wedding feast, there is dancing and celebration, but men and women are separated. After a couple of hours, a more lively celebration begins. Typically, this occurs after the older guests leave; men and women mix (though not at Orthodox weddings), and a dance is usually involved. Special dances Dancing is a major feature of Jewish weddings. It is customary for the guests to dance in front of the seated couple and entertain them, and several dances are traditional at Ashkenazi weddings. Birkat hamazon and sheva brachot After the meal, Birkat Hamazon (Grace after meals) is recited, followed by sheva brachot. At a wedding banquet, an enhanced version of the call to Birkat Hamazon is used, including (in Ashkenazic communities) the first stanza of Devai Haser. Prayer booklets called bentshers may be handed out to guests. After the prayers, the blessing over the wine is recited, with two glasses of wine poured together into a third, symbolising the creation of a new life together. Jewish prenuptial agreements In present times, Jewish rabbinical bodies have developed Jewish prenuptial agreements designed to prevent the husband from withholding a get from his wife, should she want a divorce. Such documents have been developed and widely used in the United States, Israel, the United Kingdom and other places. However, this approach has not been universally accepted, particularly by the Orthodox.
Conservative Judaism developed the Lieberman clause to prevent husbands from refusing to give their wives a get: provisions are built into the ketubah so that, if predetermined circumstances occur, the divorce goes into effect immediately. Timing Weddings should not be performed on Shabbat or on Jewish holidays, including Chol HaMoed. Weddings cannot be held on Shabbat because a wedding effects an acquisition between bride and groom, and Shabbat regulations prohibit any transactions or acquisitions. Additionally, transporting guests to the wedding and running the celebration would require labor to be performed on that day, which is not permitted. Weddings are likewise prohibited during the counting of the Omer and the Three Weeks, although customs vary regarding parts of these periods. Some months and days are considered more or less auspicious. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Fairy_tale] | [TOKENS: 9002] |
Contents Fairy tale A fairy tale (alternative names include fairytale, fairy story, household tale, magic tale, or wonder tale) is a short story that belongs to the folklore genre. Such stories typically feature magic, enchantments, and mythical or fanciful beings. In most cultures, there is no clear line separating myth from folk or fairy tale; all these together form the literature of preliterate societies. Fairy tales may be distinguished from other folk narratives such as legends (which generally involve belief in the veracity of the events described) and explicit moral tales, including beast fables. Prevalent elements include dragons, dwarfs, elves, fairies, giants, gnomes, goblins, griffins, merfolk, monsters, monarchy, pixies, talking animals, trolls, unicorns, witches, wizards, woodwoses, magic, and enchantments. In less technical contexts, the term is also used to describe something blessed with unusual happiness, as in "fairy-tale ending" (a happy ending) or "fairy-tale romance". Colloquially, the term "fairy tale" or "fairy story" can also mean any far-fetched story or tall tale; it is used especially to describe any story that not only is not true, but also could not possibly be true. Legends are perceived as real within their culture; fairy tales may merge into legends, where the narrative is perceived both by teller and hearers as being grounded in historical truth. However, unlike legends and epics, fairy tales usually do not contain more than superficial references to religion and to actual places, people, and events; they take place "once upon a time" rather than in actual times. Fairy tales occur both in oral and in literary form (literary fairy tale); the name "fairy tale" ("conte de fées" in French) was first ascribed to them by Madame d'Aulnoy in the late 17th century. Many of today's fairy tales have evolved from centuries-old stories that have appeared, with variations, in multiple cultures around the world. The history of the fairy tale is particularly difficult to trace because often only the literary forms survive. Still, according to researchers at universities in Durham and Lisbon, such stories may date back thousands of years, some to the Bronze Age. Fairy tales, and works derived from fairy tales, are still written today. Folklorists have classified fairy tales in various ways. The Aarne–Thompson–Uther Index and the morphological analysis of Vladimir Propp are among the most notable. Other folklorists have interpreted the tales' significance, but no school has been definitively established for the meaning of the tales. Terminology Some folklorists prefer to use the German term Märchen or "wonder tale" to refer to the genre rather than fairy tale, a practice given weight by the definition of Thompson in his 1977 edition of The Folktale: "...a tale of some length involving a succession of motifs or episodes. It moves in an unreal world without definite locality or definite creatures and is filled with the marvellous. In this never-never land, humble heroes kill adversaries, succeed to kingdoms and marry princesses." The characters and motifs of fairy tales are simple and archetypal: princesses and goose-girls; youngest sons and gallant princes; ogres, giants, dragons, and trolls; wicked stepmothers and false heroes; fairy godmothers and other magical helpers, often talking horses, or foxes, or birds; glass mountains; and prohibitions and breaking of prohibitions. 
Definition Although the fairy tale is a distinct genre within the larger category of folktale, the definition that marks a work as a fairy tale is a source of considerable dispute. The term itself comes from the translation of Madame D'Aulnoy's Conte de fées, first used in her collection in 1697. Common parlance conflates fairy tales with beast fables and other folktales, and scholars differ on the degree to which the presence of fairies and/or similarly mythical beings (e.g., elves, goblins, trolls, giants, huge monsters, or mermaids) should be taken as a differentiator. Vladimir Propp, in his Morphology of the Folktale, criticized the common distinction between "fairy tales" and "animal tales" on the grounds that many tales contained both fantastic elements and animals. Nevertheless, to select works for his analysis, Propp used all the Russian folktales classified as Aarne–Thompson–Uther Index types 300–749 (a cataloguing system that makes such a distinction) to gain a clear set of tales. His own analysis identified fairy tales by their plot elements, but that in itself has been criticized, as the analysis does not lend itself easily to tales that do not involve a quest, and furthermore, the same plot elements are found in non-fairy tale works. Were I asked, what is a fairytale? I should reply, Read Undine: that is a fairytale ... of all fairytales I know, I think Undine the most beautiful. – George MacDonald, The Fantastic Imagination As Stith Thompson points out, talking animals and the presence of magic seem to be more common to the fairy tale than fairies themselves. However, the mere presence of animals that talk does not make a tale a fairy tale, especially when the animal is clearly a mask on a human face, as in fables. In his essay "On Fairy-Stories", J. R. R. Tolkien agreed with the exclusion of "fairies" from the definition, defining fairy tales as stories about the adventures of men in Faërie, the land of fairies, fairytale princes and princesses, dwarves, elves, and not only other magical species but many other marvels. However, the same essay excludes tales that are often considered fairy tales, citing as an example The Monkey's Heart, which Andrew Lang included in The Lilac Fairy Book. Steven Swann Jones identified the presence of magic as the feature by which fairy tales can be distinguished from other sorts of folktales. Davidson and Chaudri identify "transformation" as the key feature of the genre. From a psychological point of view, Jean Chiriac argued for the necessity of the fantastic in these narratives. In terms of aesthetic values, Italo Calvino cited the fairy tale as a prime example of "quickness" in literature, because of the economy and concision of the tales. Originally, stories that would contemporarily be considered fairy tales were not marked out as a separate genre. The German term "Märchen" stems from the old German word "Mär", which means news or tale. The word "Märchen" is the diminutive of "Mär", and therefore means a "little story". Together with the common beginning "once upon a time", this tells us that a fairy tale or a märchen was originally a little story from a long time ago, when the world was still magic. (Indeed, one less regular German opening is "In the old times when wishing was still effective".)
The French writers and adaptors of the conte de fées genre often included fairies in their stories; the genre name became "fairy tale" in English translation and "gradually eclipsed the more general term folk tale that covered a wide variety of oral tales". Jack Zipes also attributes this shift to changing sociopolitical conditions in the seventeenth and eighteenth centuries that led to the trivialization of these stories by the upper classes. Roots of the genre come from different oral stories passed down in European cultures. The genre was first marked out by writers of the Renaissance, such as Giovanni Francesco Straparola and Giambattista Basile, and stabilized through the works of later collectors such as Charles Perrault and the Brothers Grimm. In this evolution, the name was coined when the précieuses took up writing literary stories; Madame d'Aulnoy invented the term Conte de fée, or fairy tale, in the late 17th century. Before the definition of the genre of fantasy, many works that would now be classified as fantasy were termed "fairy tales", including Tolkien's The Hobbit, George Orwell's Animal Farm, and L. Frank Baum's The Wonderful Wizard of Oz. Indeed, Tolkien's "On Fairy-Stories" includes discussions of world-building and is considered a vital part of fantasy criticism. Although fantasy, particularly the subgenre of fairytale fantasy, draws heavily on fairy tale motifs, the genres are now regarded as distinct. The fairy tale, told orally, is a sub-class of the folktale. Many writers have written in the form of the fairy tale. These are the literary fairy tales, or Kunstmärchen. The oldest forms, from Panchatantra to the Pentamerone, show considerable reworking from the oral form. The Grimm brothers were among the first to try to preserve the features of oral tales. Yet the stories printed under the Grimm name have been considerably reworked to fit the written form. Literary fairy tales and oral fairy tales freely exchanged plots, motifs, and elements with one another and with the tales of foreign lands. The literary fairy tale came into fashion during the 17th century, developed by aristocratic women as a parlour game. This, in turn, helped to maintain the oral tradition. According to Jack Zipes, "The subject matter of the conversations consisted of literature, mores, taste, and etiquette, whereby the speakers all endeavoured to portray ideal situations in the most effective oratorical style that would gradually have a major effect on literary forms." Many 18th-century folklorists attempted to recover the "pure" folktale, uncontaminated by literary versions. Yet while oral fairy tales likely existed for thousands of years before the literary forms, there is no pure folktale, and each literary fairy tale draws on folk traditions, if only in parody. This makes it impossible to trace forms of transmission of a fairy tale. Oral story-tellers have been known to read literary fairy tales to increase their own stock of stories and treatments. History The oral tradition of the fairy tale came long before the written page. Tales were told or enacted dramatically, rather than written down, and handed down from generation to generation. Because of this, the history of their development is necessarily obscure and blurred. 
Fairy tales appear, now and again, in written literature throughout literate cultures,[a][b] as in The Golden Ass, which includes Cupid and Psyche (Roman, 100–200 AD), or the Panchatantra (India, 3rd century BC), but it is unknown to what extent these reflect the actual folk tales even of their own time. The stylistic evidence indicates that these, and many later collections, reworked folk tales into literary forms. What they do show is that the fairy tale has ancient roots, older than the Arabian Nights collection of magical tales (compiled circa 1500 AD), such as Vikram and the Vampire, and Bel and the Dragon. Besides such collections and individual tales, in China Taoist philosophers such as Liezi and Zhuangzi recounted fairy tales in their philosophical works. In the broader definition of the genre, the first famous Western fairy tales are those of Aesop (6th century BC) in ancient Greece. Scholarship points out that medieval literature contains early versions or predecessors of later known tales and motifs, such as the grateful dead, The Bird Lover, or the quest for the lost wife.[c] Recognizable folktales have also been reworked as the plot of folk literature and oral epics. Jack Zipes writes in When Dreams Came True, "There are fairy tale elements in Chaucer's The Canterbury Tales, Edmund Spenser's The Faerie Queene, and in many of William Shakespeare's plays." King Lear can be considered a literary variant of fairy tales such as Water and Salt and Cap O' Rushes. The tale itself resurfaced in Western literature in the 16th and 17th centuries, with The Facetious Nights of Straparola by Giovanni Francesco Straparola (Italy, 1550 and 1553), which contains many fairy tales in its inset tales, and the Neapolitan tales of Giambattista Basile (Naples, 1634–36), which are all fairy tales. Carlo Gozzi made use of many fairy tale motifs among his commedia dell'arte scenarios, including among them one based on The Love For Three Oranges (1761). Simultaneously, Pu Songling, in China, included many fairy tales in his collection, Strange Stories from a Chinese Studio (published posthumously, 1766), which has been described by Yuken Fujita of Keio University as having "a reputation as the most outstanding short story collection." The fairy tale itself became popular among the précieuses of upper-class France (1690–1710); among the tales told in that time were those of La Fontaine and the Contes of Charles Perrault (1697), who fixed the forms of Sleeping Beauty and Cinderella. Although Straparola's, Basile's and Perrault's collections contain the oldest known forms of various fairy tales, on the stylistic evidence, all the writers rewrote the tales for literary effect. In the mid-17th century, a vogue for magical tales emerged among the intellectuals who frequented the salons of Paris. These salons were regular gatherings hosted by prominent aristocratic women, where women and men could gather to discuss the issues of the day. In the 1630s, aristocratic women began to gather in their own living rooms, salons, to discuss the topics of their choice: arts and letters, politics, and social matters of immediate concern to the women of their class: marriage, love, financial and physical independence, and access to education. This was a time when women were barred from receiving a formal education.
Some of the most gifted women writers of the period came out of these early salons (such as Madeleine de Scudéry and Madame de Lafayette), which encouraged women's independence and pushed against the gender barriers that defined their lives. The salonnières argued particularly for love and intellectual compatibility between the sexes, opposing the system of arranged marriages. Sometime in the middle of the 17th century, a passion for the conversational parlour game based on the plots of old folk tales swept through the salons. Each salonnière was called upon to retell an old tale or rework an old theme, spinning clever new stories that not only showcased verbal agility and imagination but also slyly commented on the conditions of aristocratic life. Great emphasis was placed on a mode of delivery that seemed natural and spontaneous. The decorative language of the fairy tales served an important function: disguising the rebellious subtext of the stories and sliding them past the court censors. Critiques of court life (and even of the king) were embedded in extravagant tales and in dark, sharply dystopian ones. Not surprisingly, the tales by women often featured young (but clever) aristocratic girls whose lives were controlled by the arbitrary whims of fathers, kings, and elderly wicked fairies, as well as tales in which groups of wise fairies (i.e., intelligent, independent women) stepped in and put all to rights. The salon tales as they were originally written and published have been preserved in a monumental work called Le Cabinet des Fées, an enormous collection of stories from the 17th and 18th centuries. The first collectors to attempt to preserve not only the plot and characters of the tale, but also the style in which they were told, were the Brothers Grimm, who collected German fairy tales; ironically, this meant that although their first edition (1812 & 1815) remains a treasure for folklorists, they rewrote the tales in later editions to make them more acceptable, which ensured their sales and the later popularity of their work. Such literary forms did not merely draw from the folktale, but also influenced folktales in turn. The Brothers Grimm rejected several tales for their collection, though Germans had told them the tales orally, because the tales derived from Perrault, and they concluded the tales were thereby French and not German; an oral version of "Bluebeard" was thus rejected, and the tale of Little Briar Rose, clearly related to Perrault's "Sleeping Beauty", was included only because Jacob Grimm convinced his brother that the figure of Brynhildr, from much earlier Norse mythology, proved that the sleeping princess was authentically Germanic folklore. This consideration of whether to keep Sleeping Beauty reflected a belief common among folklorists of the 19th century: that the folk tradition preserved fairy tales in forms from pre-history, except when "contaminated" by such literary forms, which led people to tell inauthentic tales. The rural, illiterate, and uneducated peasants, if suitably isolated, were the folk and would tell pure folk tales. Sometimes the folklorists regarded fairy tales as a form of fossil, the remnants of a once-perfect tale. However, further research has concluded that fairy tales never had a fixed form, and regardless of literary influence, the tellers constantly altered them for their own purposes.
The work of the Brothers Grimm influenced other collectors, both inspiring them to collect tales and leading them to similarly believe, in a spirit of romantic nationalism, that the fairy tales of a country were particularly representative of it, to the neglect of cross-cultural influence. Among those influenced were the Russian Alexander Afanasyev (first published in 1866), the Norwegians Peter Christen Asbjørnsen and Jørgen Moe (first published in 1845), the Romanian Petre Ispirescu (first published in 1874), the English Joseph Jacobs (first published in 1890), and Jeremiah Curtin, an American who collected Irish tales (first published in 1890). Ethnographers collected fairy tales throughout the world, finding similar tales in Africa, the Americas, and Australia; Andrew Lang was able to draw on not only the written tales of Europe and Asia, but also those collected by ethnographers, to fill his "coloured" fairy books series. These collectors also encouraged others to collect fairy tales, as when Yei Theodora Ozaki created a collection, Japanese Fairy Tales (1908), after encouragement from Lang. Simultaneously, writers such as Hans Christian Andersen and George MacDonald continued the tradition of literary fairy tales. Andersen's work sometimes drew on old folktales, but more often deployed fairytale motifs and plots in new tales. MacDonald incorporated fairytale motifs both in new literary fairy tales, such as The Light Princess, and in works of the genre that would become fantasy, as in The Princess and the Goblin or Lilith. Cross-cultural transmission Two theories of origins have attempted to explain the common elements in fairy tales found spread over continents. One is that a single point of origin generated any given tale, which then spread over the centuries; the other is that such fairy tales stem from common human experience and therefore can arise separately in many different places. Fairy tales with very similar plots, characters, and motifs are found spread across many different cultures. Many researchers hold this to be caused by the spread of such tales, as people repeat tales they have heard in foreign lands, although the oral nature makes it impossible to trace the route except by inference. Folklorists have attempted to determine the origin by internal evidence, which cannot always be clear; Joseph Jacobs, comparing the Scottish tale The Ridere of Riddles with the version collected by the Brothers Grimm, The Riddle, noted that in The Ridere of Riddles one hero ends up polygamously married, which might point to an ancient custom, but in The Riddle, the simpler riddle might argue greater antiquity. Folklorists of the "Finnish" (or historical-geographical) school attempted to trace fairy tales to their place of origin, with inconclusive results. Sometimes influence, especially within a limited area and time, is clearer, as when considering the influence of Perrault's tales on those collected by the Brothers Grimm. Little Briar-Rose appears to stem from Perrault's The Sleeping Beauty, as the Grimms' tale seems to be the only independent German variant. Similarly, the close agreement between the opening of the Grimms' version of Little Red Riding Hood and Perrault's tale points to an influence, although the Grimms' version adds a different ending (perhaps derived from The Wolf and the Seven Young Kids). Fairy tales tend to take on the color of their location, through the choice of motifs, the style in which they are told, and the depiction of character and local color.
The Brothers Grimm believed that European fairy tales derived from the cultural history shared by all Indo-European peoples and were therefore ancient, far older than written records. This view is supported by research by the anthropologist Jamie Tehrani and the folklorist Sara Graca Da Silva using phylogenetic analysis, a technique developed by evolutionary biologists to trace the relatedness of living and fossil species. Among the tales analysed were Jack and the Beanstalk, traced to the time of the splitting of Eastern and Western Indo-European, over 5000 years ago. Both Beauty and the Beast and Rumpelstiltskin appear to have been created some 4000 years ago. The story of The Smith and the Devil (Deal with the Devil) appears to date from the Bronze Age, some 6000 years ago. Various other studies converge to suggest that some fairy tales, for example the swan maiden, could go back to the Upper Palaeolithic. Association with children Originally, adults were the audience of a fairy tale just as often as children. Literary fairy tales appeared in works intended for adults, but in the 19th and 20th centuries the fairy tale became associated with children's literature. The précieuses, including Madame d'Aulnoy, intended their works for adults, but regarded their source as the tales that servants, or other women of lower class, would tell to children. Indeed, a novel of that time, depicting a countess's suitor offering to tell such a tale, has the countess exclaim that she loves fairy tales as if she were still a child. Among the late précieuses, Jeanne-Marie Leprince de Beaumont redacted a version of Beauty and the Beast for children, and it is her tale that is best known today. The Brothers Grimm titled their collection Children's and Household Tales and rewrote their tales after complaints that they were not suitable for children. In the modern era, fairy tales were altered so that they could be read to children. The Brothers Grimm concentrated mostly on removing sexual references; Rapunzel, in the first edition, revealed the prince's visits by asking why her clothing had grown tight, thus letting the witch deduce that she was pregnant, but in subsequent editions carelessly revealed that it was easier to pull up the prince than the witch. On the other hand, in many respects, violence – particularly when punishing villains – was increased. Other, later, revisions cut out violence; J. R. R. Tolkien noted that The Juniper Tree often had its cannibalistic stew cut out in a version intended for children. The moralizing strain in the Victorian era altered the classical tales to teach lessons, as when George Cruikshank rewrote Cinderella in 1854 to contain temperance themes. His acquaintance Charles Dickens protested, "In an utilitarian age, of all other times, it is a matter of grave importance that fairy tales should be respected." Psychoanalysts such as Bruno Bettelheim, who regarded the cruelty of older fairy tales as indicative of psychological conflicts, strongly criticized this expurgation, because it weakened their usefulness to both children and adults as ways of symbolically resolving issues. Fairy tales do teach children how to deal with difficult times. To quote Rebecca Walters (2017, p. 56): "Fairytales and folktales are part of the cultural conserve that can be used to address children's fears ... and give them some role training in an approach that honors the children's window of tolerance".
These fairy tales teach children how to deal with certain social situations and help them to find their place in society. Fairy tales teach children other important lessons too. For example, Tsitsani et al. carried out a study of children to determine the benefits of fairy tales. Parents of the children who took part in the study found that fairy tales, especially the color in them, triggered their child's imagination as they read them. The Jungian analyst and fairy tale scholar Marie Louise Von Franz interprets fairy tales[d] based on Jung's view of fairy tales as a spontaneous and naive product of soul, which can only express what soul is. That is, she looks at fairy tales as images of different phases of experiencing the reality of the soul. They are the "purest and simplest expression of collective unconscious psychic processes" and "they represent the archetypes in their simplest, barest and most concise form" because they are less overlaid with conscious material than myths and legends. "In this pure form, the archetypal images afford us the best clues to the understanding of the processes going on in the collective psyche". "The fairy tale itself is its own best explanation; that is, its meaning is contained in the totality of its motifs connected by the thread of the story. [...] Every fairy tale is a relatively closed system compounding one essential psychological meaning which is expressed in a series of symbolical pictures and events and is discoverable in these". "I have come to the conclusion that all fairy tales endeavour to describe one and the same psychic fact, but a fact so complex and far-reaching and so difficult for us to realize in all its different aspects that hundreds of tales and thousands of repetitions with a musician's variation are needed until this unknown fact is delivered into consciousness; and even then the theme is not exhausted. This unknown fact is what Jung calls the Self, which is the psychic reality of the collective unconscious. [...] Every archetype is in its essence only one aspect of the collective unconscious as well as always representing also the whole collective unconscious." Other famous figures have commented on the importance of fairy tales, especially for children. For example, G. K. Chesterton argued that "Fairy tales, then, are not responsible for producing in children fear, or any of the shapes of fear; fairy tales do not give the child the idea of the evil or the ugly; that is in the child already, because it is in the world already. Fairy tales do not give the child his first idea of bogey. What fairy tales give the child is his first clear idea of the possible defeat of bogey. The baby has known the dragon intimately ever since he had an imagination. What the fairy tale provides for him is a St. George to kill the dragon." Albert Einstein reportedly expressed how important he believed fairy tales were for children's intelligence in the quote "If you want your children to be intelligent, read them fairytales. If you want them to be more intelligent, read them more fairytales." The adaptation of fairy tales for children continues. Walt Disney's influential Snow White and the Seven Dwarfs was largely (although certainly not solely) intended for the children's market. The anime Magical Princess Minky Momo draws on the fairy tale Momotarō. Jack Zipes has spent many years working to make the older traditional stories accessible to modern readers and their children.
Motherhood Many fairy tales feature an absent mother; examples include "Beauty and the Beast", "The Little Mermaid", "Little Red Riding Hood" and "Donkeyskin", where the mother is deceased or absent and unable to help the heroines. Mothers are depicted as absent or wicked in the most popular contemporary versions of tales like "Rapunzel", "Snow White", "Cinderella" and "Hansel and Gretel"; however, some lesser-known tales or variants, such as those found in volumes edited by Angela Carter and Jane Yolen, depict mothers in a more positive light. Carter's protagonist in The Bloody Chamber is an impoverished piano student married to a Marquis much older than herself to "banish the spectre of poverty". The story is a variant on Bluebeard, a tale about a wealthy man who murders numerous young women. Carter's protagonist, who is unnamed, describes her mother as "eagle-featured" and "indomitable". Her mother is depicted as a woman who is prepared for violence, instead of hiding from it or sacrificing herself to it. The protagonist recalls how her mother kept an "antique service revolver" and once "shot a man-eating tiger with her own hand." Contemporary tales In contemporary literature, many authors have used the form of fairy tales for various reasons, such as examining the human condition from the simple framework a fairytale provides. Some authors seek to recreate a sense of the fantastic in a contemporary discourse. Some writers use fairy tale forms for modern issues; this can include using the psychological dramas implicit in the story, as when Robin McKinley retold Donkeyskin as the novel Deerskin, with emphasis on the abusive treatment the father of the tale dealt to his daughter. Sometimes, especially in children's literature, fairy tales are retold with a twist simply for comic effect, such as The Stinky Cheese Man by Jon Scieszka and The ASBO Fairy Tales by Chris Pilbeam. A common comic motif is a world where all the fairy tales take place, and the characters are aware of their role in the story, such as in the film series Shrek. Other authors may have specific motives, such as multicultural or feminist reevaluations of predominantly Eurocentric masculine-dominated fairy tales, implying critique of older narratives. The figure of the damsel in distress has been particularly attacked by many feminist critics. Examples of narrative reversal rejecting this figure include The Paperbag Princess by Robert Munsch, a picture book aimed at children in which a princess rescues a prince; Angela Carter's The Bloody Chamber, which retells a number of fairy tales from a female point of view; and Simon Hood's contemporary interpretation of various popular classics. There are also many contemporary erotic retellings of fairy tales, which explicitly draw upon the original spirit of the tales and are specifically for adults. Modern retellings focus on exploring the tale through the use of the erotic, explicit sexuality, dark and/or comic themes, female empowerment, fetish and BDSM, and multicultural and heterosexual characters. Cleis Press has released several fairy tale-themed erotic anthologies, including Fairy Tale Lust, Lustfully Ever After, and A Princess Bound.
It may be hard to lay down the rule between fairy tales and fantasies that use fairy tale motifs, or even whole plots, but the distinction is commonly made, even within the works of a single author: George MacDonald's Lilith and Phantastes are regarded as fantasies, while his "The Light Princess", "The Golden Key", and "The Wise Woman" are commonly called fairy tales. The most notable distinction is that fairytale fantasies, like other fantasies, make use of novelistic writing conventions of prose, characterization, or setting. Fairy tales have also been enacted dramatically; records exist of this in commedia dell'arte, and later in pantomime. Film has proved an especially effective medium for conveying fairy tales to an audience: the advent of cinema meant that such stories could be presented in a more plausible manner, with the use of special effects and animation. The Walt Disney Company has had a significant impact on the evolution of the fairy tale film. Some of the earliest short silent films from the Disney studio were based on fairy tales, and some fairy tales were adapted into shorts in the musical comedy series "Silly Symphony", such as Three Little Pigs. Walt Disney's first feature-length film, Snow White and the Seven Dwarfs, released in 1937, was a ground-breaking film for fairy tales and, indeed, fantasy in general. Costing over 400 percent of its original budget and employing more than 300 artists, assistants, and animators, Snow White and the Seven Dwarfs was arguably one of the most labour-intensive films of its time. The studio even hired Don Graham to run animation training programs for more than 700 staff. To capture movement and personality convincingly, the studio used a dancer, Marjorie Celeste, as a model from the beginning to the end of production. Disney and his creative successors have returned to traditional and literary fairy tales numerous times with films such as Cinderella (1950), Sleeping Beauty (1959), The Little Mermaid (1989) and Beauty and the Beast (1991). Disney's influence helped establish the fairy tale as a genre for children, and Disney has been accused by some of bowdlerizing the gritty naturalism – and sometimes unhappy endings – of many folk fairy tales. However, others note that the softening of fairy tales occurred long before Disney, some of which was even done by the Grimm brothers themselves. Many filmed fairy tales have been made primarily for children, from Disney's later works to Aleksandr Rou's retelling of Vasilissa the Beautiful, the first Soviet film to use Russian folk tales in a big-budget feature. Others have used the conventions of fairy tales to create new stories with sentiments more relevant to contemporary life, as in Labyrinth, My Neighbor Totoro, Happily N'Ever After, and the films of Michel Ocelot. Other works have retold familiar fairy tales in a darker, more horrific or psychological variant aimed primarily at adults. Notable examples are Jean Cocteau's Beauty and the Beast and The Company of Wolves, based on Angela Carter's retelling of Little Red Riding Hood. Likewise, Princess Mononoke, Pan's Labyrinth, Suspiria, and Spike create new stories in this genre from fairy tale and folklore motifs.
In comics and animated TV series, The Sandman, Hellboy, Revolutionary Girl Utena, Princess Tutu, Fables, and MÄR all make use of standard fairy tale elements to various extents, but are more accurately categorised as fairytale fantasy due to the definite locations and characters which a longer narrative requires. A more modern cinematic fairy tale is Luchino Visconti's Le Notti Bianche, starring Marcello Mastroianni before he became a superstar. It involves many of the romantic conventions of fairy tales, yet it takes place in post-World War II Italy, and it ends realistically. In recent years, Disney has dominated the fairy tale film industry by remaking its animated fairy tale films as live-action features; examples include Maleficent (2014), Cinderella (2015), and Beauty and the Beast (2017). Motifs Any comparison of fairy tales quickly discovers that many fairy tales have features in common with each other. Two of the most influential classifications are those of Antti Aarne, as revised by Stith Thompson into the Aarne-Thompson classification system, and Vladimir Propp's Morphology of the Folk Tale. The Aarne-Thompson system groups fairy and folk tales according to their overall plot. Common, identifying features are picked out to decide which tales are grouped together; much therefore depends on what features are regarded as decisive. For instance, tales like Cinderella – in which a persecuted heroine, with the help of the fairy godmother or similar magical helper, attends an event (or three) in which she wins the love of a prince and is identified as his true bride – are classified as type 510, the persecuted heroine. Some such tales are The Wonderful Birch; Aschenputtel; Katie Woodencloak; The Story of Tam and Cam; Ye Xian; Cap O' Rushes; Catskin; Fair, Brown and Trembling; Finette Cendron; and Allerleirauh. Further analysis of the tales shows that in Cinderella, The Wonderful Birch, The Story of Tam and Cam, Ye Xian, and Aschenputtel, the heroine is persecuted by her stepmother and refused permission to go to the ball or other event, and in Fair, Brown and Trembling and Finette Cendron by her sisters and other female figures; these are grouped as 510A. In Cap O' Rushes, Catskin, and Allerleirauh, the heroine is driven from home by her father's persecutions and must take work in a kitchen elsewhere; these are grouped as 510B. But in Katie Woodencloak, she is driven from home by her stepmother's persecutions and must take service in a kitchen elsewhere, and in Tattercoats, she is refused permission to go to the ball by her grandfather. Given these features common to both types of 510, Katie Woodencloak is classified as 510A because the villain is the stepmother, and Tattercoats as 510B because the grandfather fills the father's role. The system has a weakness in that there is no way to classify subportions of a tale as motifs. Rapunzel is type 310 (The Maiden in the Tower), but it opens with a child being demanded in return for stolen food, as does Puddocky; but Puddocky is not a Maiden in the Tower tale, while The Canary Prince, which opens with a jealous stepmother, is. The system also lends itself to emphasis on the common elements, to the extent that the folklorist describes The Black Bull of Norroway as the same story as Beauty and the Beast. This can be useful as a shorthand, but can also erase the coloring and details of a story.
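To make the type-assignment logic above concrete, here is a minimal sketch in Python. It is purely illustrative: the Tale record and the classify_type_510 rule are invented for this example, and the real Aarne-Thompson-Uther catalogue is applied by folklorists' judgment rather than by mechanical rules.

from dataclasses import dataclass

# Toy model of the ATU type-510 distinction described above. The data
# model and the rule are illustrative assumptions, not the actual
# cataloguing procedure.

@dataclass
class Tale:
    title: str
    persecutor: str  # e.g. "stepmother", "sisters", "father", "grandfather"

FATHER_FIGURES = {"father", "grandfather"}

def classify_type_510(tale: Tale) -> str:
    # The subtype is decided by who fills the villain's role, as in
    # the Katie Woodencloak / Tattercoats examples above.
    return "510B" if tale.persecutor in FATHER_FIGURES else "510A"

for t in [Tale("Aschenputtel", "stepmother"),
          Tale("Cap O' Rushes", "father"),
          Tale("Katie Woodencloak", "stepmother"),
          Tale("Tattercoats", "grandfather")]:
    print(t.title, "->", classify_type_510(t))

Run on the examples from the text, this prints 510A for Aschenputtel and Katie Woodencloak and 510B for Cap O' Rushes and Tattercoats, matching the groupings given above.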
Vladimir Propp specifically studied a collection of Russian fairy tales, but his analysis has been found useful for the tales of other countries. Having criticized Aarne-Thompson type analysis for ignoring what motifs did in stories, and because the motifs used were not clearly distinct, he analyzed the tales for the function each character and action fulfilled, and concluded that a tale was composed of thirty-one elements ('functions') and seven characters or 'spheres of action' ('the princess and her father' are a single sphere). While the elements were not all required for all tales, when they appeared they did so in an invariant order – except that each individual element might be negated twice, so that it would appear three times, as when, in Brother and Sister, the brother resists drinking from enchanted streams twice, so that it is the third stream that enchants him. Propp's 31 functions also fall within six 'stages' (preparation, complication, transference, struggle, return, recognition), and a stage can also be repeated, which can affect the perceived order of elements. One such element is the donor, who gives the hero magical assistance, often after testing him. In The Golden Bird, the talking fox tests the hero by warning him against entering an inn and, after he succeeds, helps him find the object of his quest; in The Boy Who Drew Cats, the priest advises the hero to stay in small places at night, which protects him from an evil spirit; in Cinderella, the fairy godmother gives Cinderella the dresses she needs to attend the ball, as their mothers' spirits do in Bawang Putih Bawang Merah and The Wonderful Birch; in The Fox Sister, a Buddhist monk gives the brothers magical bottles to protect against the fox spirit. The roles can be more complicated. In The Red Ettin, the role is split into the mother – who offers the hero the whole of a journey cake with her curse or half with her blessing – and, when he takes the half, a fairy who gives him advice; in Mr Simigdáli, the sun, the moon, and the stars all give the heroine a magical gift. Characters who are not always the donor can act like the donor. In Kallo and the Goblins, the villain goblins also give the heroine gifts, because they are tricked; in Schippeitaro, the evil cats betray their secret to the hero, giving him the means to defeat them. Other fairy tales, such as The Story of the Youth Who Went Forth to Learn What Fear Was, do not feature the donor. Analogies have been drawn between this and the analysis of myths into the hero's journey. Interpretations Many fairy tales have been interpreted for their (purported) significance. One mythological interpretation saw many fairy tales, including Hansel and Gretel, Sleeping Beauty, and The Frog King, as solar myths; this mode of interpretation subsequently became rather less popular. Freudian, Jungian, and other psychological analyses have also explicated many tales, but no mode of interpretation has established itself definitively. Specific analyses have often been criticized for lending great importance to motifs that are not, in fact, integral to the tale; this has often stemmed from treating one instance of a fairy tale as the definitive text, where the tale has been told and retold in many variations.
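Propp's claim that functions may be skipped but never reordered can also be illustrated with a short sketch. The handful of function names and their canonical ordering below are a hand-picked subset for demonstration (Propp's full scheme has thirty-one functions); the code is an illustration of the idea, not a tool from the scholarly literature.

# A few of Propp's functions, in canonical order (illustrative subset).
CANONICAL_ORDER = [
    "absentation",   # a family member leaves home
    "interdiction",  # a rule is imposed on the hero
    "violation",     # the rule is broken
    "donor_test",    # the donor tests the hero
    "acquisition",   # the hero gains the magical agent
]
RANK = {name: i for i, name in enumerate(CANONICAL_ORDER)}

def follows_propp_order(tale_functions):
    # True if the functions that appear do so in canonical order:
    # functions may be skipped, but never reordered.
    ranks = [RANK[f] for f in tale_functions]
    return all(a <= b for a, b in zip(ranks, ranks[1:]))

# A tale may skip functions (no interdiction or violation here)...
print(follows_propp_order(["absentation", "donor_test", "acquisition"]))  # True
# ...but reordering them violates the scheme.
print(follows_propp_order(["donor_test", "absentation", "acquisition"]))  # False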
In variants of Bluebeard, the wife's curiosity is betrayed by a blood-stained key, by an egg's breaking, or by the singing of a rose she wore, without affecting the tale, but interpretations of specific variants have claimed that the precise object is integral to the tale. Other folklorists have interpreted tales as historical documents. Many German folklorists, believing the tales to have preserved details from ancient times, have used the Grimms' tales to explain ancient customs. One approach sees the topography of European Märchen as echoing the period immediately following the last Ice Age. Other folklorists have explained the figure of the wicked stepmother in a historical/sociological context: many women did die in childbirth, their husbands remarried, and the new stepmothers competed with the children of the first marriage for resources. In a 2012 lecture, Jack Zipes reads fairy tales as examples of what he calls "childism". He suggests that there are terrible aspects to the tales, which (among other things) have conditioned children to accept mistreatment and even abuse. Fairy tales in music Fairy tales have inspired music, notably opera, such as the French Opéra féerie and the German Märchenoper. French examples include Grétry's Zémire et Azor and Auber's Le cheval de bronze; German examples include Mozart's Die Zauberflöte, Humperdinck's Hänsel und Gretel, Siegfried Wagner's An allem ist Hütchen schuld! (which is based on many fairy tales), and Carl Orff's Die Kluge. Ballet, too, is fertile ground for bringing fairy tales to life: Igor Stravinsky's first ballet, The Firebird, uses elements from various classic Russian tales. Contemporary fairy tales have even been written to inspire new works of music; "Raven Girl" by Audrey Niffenegger was written to inspire a new dance for the Royal Ballet in London. The song "Singring and the Glass Guitar" by the American band Utopia, recorded for their album "Ra", is called "An Electrified Fairytale". Composed by the four members of the band – Roger Powell, Kasim Sulton, Willie Wilcox and Todd Rundgren – it tells the story of the theft of the Glass Guitar by Evil Forces and its recovery by the four heroes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_note-Bobrovskiy_Hope_Ivantsov_Nettersheim_pp._1246–1249-94] | [TOKENS: 6011] |
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the last of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes appeared during the Ordovician radiation, 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya, during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, the arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals share several characteristics with other living things: they are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own, a feature they share with fungi; animals ingest organic material and digest it internally. Animals also have structural characteristics that set them apart from all other living things. Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits, and animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction in which the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. The selective pressures they impose on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic and competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow, to sustain basal metabolism, and to fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidising of inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres in length.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The major animal phyla differ greatly in their estimated numbers of described extant species, in their principal habitats (terrestrial, fresh water, and marine), and in their free-living or parasitic ways of life. Species estimates are based on the numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species (including those not yet described) was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors; in their external phylogeny, the successive outgroups to the Choanozoa are, in order of increasing distance, the Filasterea, Pluriformea, Ichthyosporea, and Holomycota (including fungi), though some of these relationships remain uncertain. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals; the presence of Hox genes in Placozoa, however, shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both; in the sponge-sister cladogram that they supported, Porifera branches first, then Ctenophora, then Placozoa, with Cnidaria and Bilateria as each other's closest relatives (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera and then Placozoa, again with Cnidaria and Bilateria as closest relatives. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
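The two rootings described above differ only in whether sponges or ctenophores branch first. A minimal sketch in Python (an illustrative encoding of the branching orders stated above, not code from either study) represents each hypothesis as nested tuples and prints it in Newick notation:

sponge_sister = ("Porifera", ("Ctenophora", ("Placozoa", ("Cnidaria", "Bilateria"))))
ctenophore_sister = ("Ctenophora", ("Porifera", ("Placozoa", ("Cnidaria", "Bilateria"))))

def newick(tree):
    # Leaves are strings; internal nodes are tuples of subtrees.
    if isinstance(tree, str):
        return tree
    return "(" + ",".join(newick(child) for child in tree) + ")"

print("sponge-sister:    ", newick(sponge_sister) + ";")
print("ctenophore-sister:", newick(ctenophore_sister) + ";")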
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny, the Xenacoelomorpha are sister to the remaining bilaterians, which comprise the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia). Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes or radiata (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food, and a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Jewish_religious_clothing] | [TOKENS: 2275] |
Contents Jewish religious clothing Jewish religious clothing is apparel worn by Jews in connection with the practice of the Jewish religion. Jewish religious clothing has changed over time while maintaining the influences of biblical commandments and Jewish religious law regarding clothing and modesty (tzniut). Contemporary styles in the wider culture also have a bearing on Jewish religious clothing, although this extent is limited. Historical background The Torah set forth rules for dress that, following later rabbinical tradition, were interpreted as setting Jews apart from the communities in which they lived. Classical Greek and Roman sources, which often ridicule many aspects of Jewish life, do not remark on Jewish clothing or subject it to caricature, as they do when touching on Celtic, Germanic, and Iranian peoples, whose different modes of dress they mock. Cultural anthropologist Eric Silverman argues that Jews in late antiquity wore clothes and hairstyles like those of the peoples around them. 2 Maccabees 4:12 relates that the Maccabees slaughtered Jewish youths guilty of Hellenizing by wearing caps typical of Greek youths. In the Mishnaic period, as well as in many Islamic countries until the mid-20th century, Jewish men typically wore a tunic (Hebrew: חלוק, romanized: ḥaluq) instead of trousers. In the same countries, many different local regulations emerged to make Christian and Jewish dhimmis look distinctive in their public appearance. In 1198, the Almohad caliph Yaqub al-Mansur decreed that Jews must wear dark blue garb with very large sleeves and a grotesquely oversized hat; his son altered the colour to yellow, a change that may have influenced Catholic ordinances some time later. German ethnographer Erich Brauer (1895–1942) noted that in the Yemen of his time, Jews were not allowed to wear clothing of any colour besides blue. Earlier, in Jacob Saphir's time (1859), they would wear outer garments that were "utterly black". In France, during the Middle Ages, Jewish men typically wore trousers and a chemise, thought by Rashi to have been equivalent to the tunic worn by Jewish men of the east. Men's clothing Many Jewish men historically wore a turban or sudra, a tunic, a tallit, and sandals in summer. Oriental Jewish men in late-Ottoman and British Mandate Palestine would wear the tarbush on their heads. The tallit is a Jewish prayer shawl worn while reciting morning prayers as well as in the synagogue on Shabbat and holidays. In Yemen, such garments were not unique to prayer time alone but were worn the entire day. In many Ashkenazi communities, a tallit is worn only after marriage. The tallit has special twined and knotted fringes known as tzitzit attached to its four corners. It is sometimes referred to as Arba kanefot (lit. 'four corners'), although the term is more common for a tallit katan, an undergarment with tzitzit. According to the biblical commandments, tzitzit must be attached to any four-cornered garment, and a thread with a blue dye known as tekhelet was originally included in the tzitzit; however, a missing blue thread does not impair the validity of the white ones. Jewish tradition varies with respect to burial with or without a tallit. While all the deceased are buried in tachrichim (burial shrouds), some communities, such as the Yemenite Jews, do not bury their dead in their tallit.
The Shulhan Arukh and the Arba'ah Turim, following the legal opinion of Nahmanides, require burying the dead with their tallit, a practice that has become general amongst most religious Jews. Among other communities, the matter depends upon custom. Since tzitzit are considered to be a time-bound commandment, only men are required to wear them. Authorities have differed as to whether women are prohibited, permitted or encouraged to wear them. Medieval authorities tended toward leniency, with more prohibitive rulings gaining in precedence since the 16th century. Conservative Judaism regards women as exempt from wearing tzitzit, not as prohibited, and the tallit has become more common among Conservative women since the 1970s. Some progressive Jewish women choose to take on the obligations of tzitzit and tefillin, and it has become common for a girl to receive a tallit when she becomes bat mitzvah. A kippah or yarmulke (also called a kappel) is a thin, slightly rounded skullcap traditionally worn at all times by Orthodox Jewish men, and sometimes by both men and women in Conservative and Reform communities. Its use is associated with demonstrating respect and reverence for God. Jews in Arab lands did not traditionally wear yarmulkes, but rather larger, rounded, brimless hats, such as the kufi or tarboush. A kittel (Yiddish: קיטל, romanized: kitl) is a white, knee-length, cotton robe worn by Jewish prayer leaders and some Orthodox Jews on the High Holy Days. In some families, the head of the household wears a kittel at the Passover seder, while in other families all married men wear them. In many Ashkenazi Orthodox circles, it is customary for the groom to wear a kittel under the chuppah (wedding canopy). Women's clothing Ezra the Scribe is said to have made one of the earliest enactments on women's attire, requiring all Jewish women to be girded with a wide belt or waistband (Hebrew: סינר), whether from the front or from the back, out of modesty (Babylonian Talmud, Baba Kama 82a). In subsequent years, the Sages of Israel forbade Jewish women from wearing any predominantly red-coloured accoutrement, as it attracts undue attention. Married observant Jewish women wear a scarf (tichel or mitpahat), snood, hat, beret, or sometimes a wig (sheitel) in order to conform to the requirement of Jewish religious law that married women cover their hair. Jewish women were distinguished from others in the western regions of the Roman Empire by their custom of veiling in public; in the eastern regions, the custom of veiling was shared by Jews with others. The custom petered out among Roman women, but was retained by Jewish women as a sign of their identification as Jews, and it has been retained among Orthodox women. Evidence drawn from the Talmud shows that pious Jewish women would wear shawls over their heads when they left their homes, but there was no practice of fully covering the face. In the medieval era, Jewish women started veiling their faces under the influence of the Islamic societies they lived in. In some Muslim regions, such as Baghdad, Jewish women veiled their faces until the 1930s. In the more lax Kurdish regions, Jewish women did not cover their faces.
Jewish vs gentile customs Based on the rabbinic traditions of the Talmud, the 12th-century philosopher Maimonides forbade emulating gentile dress and apparel when those same items of clothing have immodest designs, are connected somehow to an idolatrous practice, or are worn because of some superstitious practice (i.e., "the ways of an Amorite"). A question was posed to the 15th-century Rabbi Joseph Colon (Maharik) regarding "gentile clothing" and whether or not a Jew who wears such clothing transgresses the biblical prohibition, "You shall not walk in their precepts" (Leviticus 18:3). In a protracted responsum, Rabbi Colon wrote that any Jew who is a practising physician is permitted to wear a physician's cape (traditionally worn by gentile physicians on account of their expertise in that particular field of science and their wanting to be recognized as such), and that the Jewish physician who wears it has not infringed upon any law in the Torah, even though Jews were not wont to wear such garments in former times. He noted that nothing in wearing such a garment is attributable to "superstitious" practice, that there is nothing promiscuous or immodest about wearing such a cape, and that it is not worn out of haughtiness. Moreover, he understood from Maimonides (Hilkhot Avodat Kokhavim 11:1) that there is no commandment requiring a Jew to seek out clothing that would make him stand out as "different" from what is worn by gentiles, but rather only to make sure that what a Jew wears is not an "exclusively" gentile item of clothing. Since the custom of wearing the cape varies from place to place, and since in France physicians do not customarily wear such capes, he reasoned that it cannot be an exclusively gentile custom. According to Rabbi Colon, modesty was still a criterion for wearing gentile clothing, writing: "...even if Israel made it as their custom [to wear] a certain item of clothing, while the Gentiles [would wear] something different, if the Israelite garment should not measure up to [the standard established in] Judaism or of modesty more than what the Gentiles hold as their practice, there is no prohibition whatsoever for an Israelite to wear the garment that is practised among the Gentiles, seeing that it is in [keeping with] the way of fitness and modesty just as that of Israel." Rabbi Joseph Karo (1488–1575), following in the footsteps of Colon, ruled in accordance with Colon's teaching in his seminal work Beit Yosef on the Tur (Yoreh De'ah §178), and in his commentary Kessef Mishneh (on Maimonides' Mishne Torah, Hilkhot Avodat Kokhavim 11:1), making the wearing of gentile clothing contingent upon three factors: 1) that the clothing not be promiscuous; 2) that it not be linked to an idolatrous practice; and 3) that it not be worn because of some superstitious practice (or "the way of the Amorites"). Rabbi Moses Isserles (1530–1572) opines that to these strictures may be added a prohibition on wearing clothes that are a "custom" for the gentiles to wear, that is to say, an exclusively gentile custom where the clothing is immodest. The rabbi and posek Moshe Feinstein (1895–1986) subscribed to the same strictures.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Anaerobic_respiration] | [TOKENS: 1132] |
Contents Anaerobic respiration Anaerobic respiration is respiration using electron acceptors other than molecular oxygen (O₂) in its electron transport chain. In aerobic organisms, electrons are shuttled to an electron transport chain, and the final electron acceptor is oxygen. Molecular oxygen is an excellent electron acceptor. Anaerobes instead use less-oxidizing substances (in either a thermodynamic or a kinetic sense) such as nitrate (NO₃⁻), fumarate (C₄H₂O₄²⁻), sulfate (SO₄²⁻), or elemental sulfur (S). These terminal electron acceptors have smaller reduction potentials than O₂, so less energy is released per oxidized molecule; anaerobic respiration is therefore less efficient than aerobic respiration. As compared with fermentation Anaerobic cellular respiration and fermentation generate ATP in very different ways, and the terms should not be treated as synonyms. Cellular respiration (both aerobic and anaerobic) uses highly reduced chemical compounds such as NADH and FADH₂ (for example, produced during glycolysis and the citric acid cycle) to establish an electrochemical gradient (often a proton gradient) across a membrane. This results in an electrical potential or ion concentration difference across the membrane. The reduced chemical compounds are oxidized by a series of respiratory integral membrane proteins with sequentially increasing reduction potentials, with the final electron acceptor being oxygen (in aerobic respiration) or another chemical substance (in anaerobic respiration). The proton motive force drives protons down the gradient (across the membrane) through the proton channel of ATP synthase, and the resulting current drives ATP synthesis from ADP and inorganic phosphate. Fermentation, in contrast, does not use an electrochemical gradient, but instead uses only substrate-level phosphorylation to produce ATP. The electron acceptor NAD⁺ is regenerated from the NADH formed in oxidative steps of the fermentation pathway by the reduction of oxidized compounds. These oxidized compounds are often formed during the fermentation pathway itself, but may also be external. For example, in homofermentative lactic acid bacteria, the NADH formed during the oxidation of glyceraldehyde-3-phosphate is oxidized back to NAD⁺ by the reduction of pyruvate to lactic acid at a later stage in the pathway; in yeast, acetaldehyde is reduced to ethanol to regenerate NAD⁺. There are two important anaerobic microbial methane formation pathways: carbon dioxide/bicarbonate (HCO₃⁻) reduction (respiration), and acetate fermentation. Ecological importance Anaerobic respiration is a critical component of the global nitrogen, iron, sulfur, and carbon cycles through the reduction of the oxyanions of nitrogen, sulfur, and carbon to more-reduced compounds. The biogeochemical cycling of these compounds, which depends upon anaerobic respiration, significantly impacts the carbon cycle and global warming. Anaerobic respiration occurs in many environments, including freshwater and marine sediments, soil, subsurface aquifers, deep subsurface environments, and biofilms. Even environments that contain oxygen, such as soil, have micro-environments that lack oxygen due to the slow diffusion of oxygen gas. An example of the ecological importance of anaerobic respiration is the use of nitrate as a terminal electron acceptor, or dissimilatory denitrification, which is the main route by which fixed nitrogen is returned to the atmosphere as molecular nitrogen gas.
The denitrification process is also very important in host-microbe interactions. Just as oxygen-respiring microorganisms rely on mitochondria, some single-celled anaerobic ciliates use denitrifying endosymbionts to gain energy. Another example is methanogenesis, a form of carbon-dioxide respiration, which is used to produce methane gas by anaerobic digestion. Biogenic methane can be a sustainable alternative to fossil fuels; however, uncontrolled methanogenesis in landfill sites releases large amounts of methane into the atmosphere, where it acts as a potent greenhouse gas. Sulfate respiration produces hydrogen sulfide, which is responsible for the characteristic 'rotten egg' smell of coastal wetlands and has the capacity to precipitate heavy metal ions from solution, leading to the deposition of sulfidic metal ores. Economic relevance Dissimilatory denitrification is widely used in the removal of nitrate and nitrite from municipal wastewater. An excess of nitrate can lead to eutrophication of the waterways into which treated water is released, and elevated nitrite levels in drinking water can cause problems due to its toxicity. Denitrification converts both compounds into harmless nitrogen gas. Specific types of anaerobic respiration are also critical in bioremediation, which uses microorganisms to convert toxic chemicals into less-harmful molecules to clean up contaminated beaches, aquifers, lakes, and oceans. For example, toxic arsenate or selenate can be reduced to less toxic compounds by various anaerobic bacteria via anaerobic respiration, and the reduction of chlorinated chemical pollutants, such as vinyl chloride and carbon tetrachloride, also occurs through anaerobic respiration. Anaerobic respiration is useful in generating electricity in microbial fuel cells, which employ bacteria that respire solid electron acceptors (such as oxidized iron) to transfer electrons from reduced compounds to an electrode. This process can simultaneously degrade organic carbon waste and generate electricity.
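The energy penalty described above follows from the relation ΔG°′ = −nFΔE°′. The sketch below (Python) uses textbook standard reduction potentials at pH 7 — assumed values, not quoted in this article — to compare the free energy released when the two electrons from one NADH reach different terminal acceptors:

F = 96485.0     # Faraday constant, C per mol of electrons
E_NADH = -0.32  # assumed E°' of the NAD+/NADH couple, volts

acceptors = {   # assumed textbook E°' values at pH 7, volts
    "O2/H2O (aerobic)": +0.82,
    "NO3-/NO2- (nitrate respiration)": +0.42,
    "fumarate/succinate": +0.03,
    "SO4(2-)/HS- (sulfate respiration)": -0.22,
}

for name, E in acceptors.items():
    dE = E - E_NADH            # overall potential difference
    dG = -2 * F * dE / 1000.0  # kJ per mol NADH (n = 2 electrons)
    print(f"{name:34s} dE = {dE:+.2f} V   dG = {dG:7.1f} kJ/mol")

The output ranks oxygen (about −220 kJ/mol) well above nitrate (about −143 kJ/mol) and far above sulfate (about −19 kJ/mol), illustrating why respiration with weaker acceptors is less efficient.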
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/4_Vesta] | [TOKENS: 6750] |
Contents 4 Vesta Vesta (minor-planet designation: 4 Vesta) is one of the largest objects in the asteroid belt, with a mean diameter of 525 kilometres (326 mi). It was discovered by the German astronomer Heinrich Wilhelm Matthias Olbers on 29 March 1807 and is named after Vesta, the virgin goddess of home and hearth from Roman mythology. Vesta is thought to be the second-largest asteroid, both by mass and by volume, after the dwarf planet Ceres. Measurements give it a nominal volume only slightly larger than that of Pallas (about 5% greater), but it is 25% to 30% more massive. It constitutes an estimated 9% of the mass of the asteroid belt. Vesta is the only known remaining rocky protoplanet of the kind that formed the terrestrial planets. Numerous fragments of Vesta were ejected by collisions one and two billion years ago that left two enormous craters occupying much of Vesta's southern hemisphere. Debris from these events has fallen to Earth as howardite–eucrite–diogenite (HED) meteorites, which have been a rich source of information about Vesta. Vesta is the brightest asteroid visible from Earth. It is regularly as bright as magnitude 5.1, at which times it is faintly visible to the naked eye. Its maximum distance from the Sun is slightly greater than the minimum distance of Ceres from the Sun,[e] although its orbit lies entirely within that of Ceres. NASA's Dawn spacecraft entered orbit around Vesta on 16 July 2011 for a one-year exploration and left the orbit of Vesta on 5 September 2012 en route to its final destination, Ceres. Researchers continue to examine data collected by Dawn for additional insights into the formation and history of Vesta. History Heinrich Olbers discovered Pallas in 1802, the year after the discovery of Ceres. He proposed that the two objects were the remnants of a destroyed planet. He sent a letter with his proposal to the British astronomer William Herschel, suggesting that a search near the locations where the orbits of Ceres and Pallas intersected might reveal more fragments. These orbital intersections were located in the constellations of Cetus and Virgo. Olbers commenced his search in 1802, and on 29 March 1807 he discovered Vesta in the constellation Virgo—a coincidence, because Ceres, Pallas, and Vesta are not fragments of a larger body. Because the asteroid Juno had been discovered in 1804, this made Vesta the fourth object to be identified in the region that is now known as the asteroid belt. The discovery was announced in a letter addressed to German astronomer Johann H. Schröter dated 31 March. Because Olbers already had credit for discovering a planet (Pallas; at the time, the asteroids were considered to be planets), he gave the honor of naming his new discovery to German mathematician Carl Friedrich Gauss, whose orbital calculations had enabled astronomers to confirm the existence of Ceres, the first asteroid, and who had computed the orbit of the new planet in the remarkably short time of 10 hours. Gauss decided on the Roman virgin goddess of home and hearth, Vesta. Vesta was the fourth asteroid to be discovered, hence the number 4 in its formal designation. The name Vesta, or national variants thereof, is in international use with two exceptions: Greece and China. In Greek, the name adopted was the Hellenic equivalent of Vesta, Hestia (4 Εστία); in English, that name is used for 46 Hestia (Greeks use the name "Hestia" for both, with the minor-planet numbers used for disambiguation). 
In Chinese, Vesta is called the 'hearth-god(dess) star', 灶神星 Zàoshénxīng, naming the asteroid for Vesta's role in mythology, similar to the Chinese names of Uranus, Neptune, and Pluto.[f] Upon its discovery, Vesta was, like Ceres, Pallas, and Juno before it, classified as a planet and given a planetary symbol. The symbol, designed by Gauss, represented the altar of Vesta with its sacred fire. Gauss's form of the symbol, now obsolete, was encoded in Unicode 17.0 as U+1F777.[g] The asteroid symbols were gradually retired from astronomical use after 1852, but the symbols for the first four asteroids were resurrected for astrology in the 1970s. The abbreviated modern astrological variant of the Vesta symbol is ⚶ (U+26B6).[h] After the discovery of Vesta, no further objects were discovered for 38 years, and during this time the Solar System was thought to have eleven planets. However, in 1845, new asteroids started being discovered at a rapid pace, and by 1851 there were fifteen, each with its own symbol, in addition to the eight major planets (Neptune had been discovered in 1846). It soon became clear that it would be impractical to continue inventing new planetary symbols indefinitely, and some of the existing ones proved difficult to draw quickly. That year, the problem was addressed by Benjamin Apthorp Gould, who suggested numbering asteroids in their order of discovery and placing this number in a disk (circle) as the generic symbol of an asteroid. Thus, the fourth asteroid, Vesta, acquired the generic symbol ④. This was soon coupled with the name into an official number–name designation, ④ Vesta, as the number of minor planets increased. By 1858, the circle had been simplified to parentheses, (4) Vesta, which were easier to typeset. Other punctuation, such as 4) Vesta and 4, Vesta, was also briefly used, but had more or less completely died out by 1949. Photometric observations of Vesta were made at the Harvard College Observatory in 1880–1882 and at the Observatoire de Toulouse in 1909. These and other observations allowed the rotation rate of Vesta to be determined by the 1950s. However, the early estimates of the rotation rate came into question because the light curve included variations in both shape and albedo. Early estimates of the diameter of Vesta ranged from 383 kilometres (238 mi) in 1825, to 444 km (276 mi). E. C. Pickering produced an estimated diameter of 513 ± 17 km (319 ± 11 mi) in 1879, which is close to the modern value for the mean diameter, but subsequent estimates ranged from a low of 390 km (242 mi) up to a high of 602 km (374 mi) during the next century. These estimates were based on photometry. In 1989, speckle interferometry was used to measure a dimension that varied between 498 and 548 km (309 and 341 mi) during the rotational period. In 1991, an occultation of the star SAO 93228 by Vesta was observed from multiple locations in the eastern United States and Canada. Based on observations from 14 different sites, the best fit to the data was an elliptical profile with dimensions of about 550 km × 462 km (342 mi × 287 mi), and Dawn later confirmed this measurement.[i] Such measurements help to determine Vesta's thermal history, the size of its core, the role of water in asteroid evolution, and which meteorites found on Earth come from such bodies, with the ultimate goal of understanding the conditions and processes present at the Solar System's earliest epoch and the role of water content and size in planetary evolution.
Vesta became the first asteroid to have its mass determined. Every 18 years, the asteroid 197 Arete approaches within 0.04 AU of Vesta. In 1966, based upon observations of Vesta's gravitational perturbations of Arete, Hans G. Hertz estimated the mass of Vesta at (1.20±0.08)×10⁻¹⁰ M☉ (solar masses). More refined estimates followed, and in 2001 the perturbations of 17 Thetis were used to calculate the mass of Vesta to be (1.31±0.02)×10⁻¹⁰ M☉. Dawn determined it to be 1.3029×10⁻¹⁰ M☉. Orbit Vesta orbits the Sun between Mars and Jupiter, within the asteroid belt, with a period of 3.6 Earth years, specifically in the inner asteroid belt, interior to the Kirkwood gap at 2.50 AU. Its orbit is moderately inclined (i = 7.1°, compared to 7° for Mercury and 17° for Pluto) and moderately eccentric (e = 0.09, about the same as for Mars). True orbital resonances between asteroids are considered unlikely; because of their small masses relative to their large separations, such relationships should be very rare. Nevertheless, Vesta is able to capture other asteroids into temporary 1:1 resonant orbital relationships (for periods of up to 2 million years or more), and about forty such objects have been identified. Decameter-sized objects detected in the vicinity of Vesta by Dawn may be such quasi-satellites rather than proper satellites. Rotation Vesta's rotation is relatively fast for an asteroid (5.342 h) and prograde, with the north pole pointing in the direction of right ascension 20 h 32 min, declination +48° (in the constellation Cygnus), with an uncertainty of about 10°. This gives an axial tilt of 29°. Coordinate systems Two longitudinal coordinate systems are used for Vesta, with prime meridians separated by 150°. The IAU established a coordinate system in 1997 based on Hubble photos, with the prime meridian running through the center of Olbers Regio, a dark feature 200 km (120 mi) across. When Dawn arrived at Vesta, mission scientists found that the location of the pole assumed by the IAU was off by 10°, so that the IAU coordinate system drifted across the surface of Vesta at 0.06° per year, and also that Olbers Regio was not discernible from up close, and so was not adequate to define the prime meridian with the precision they needed. They corrected the pole, but also established a new prime meridian 4° from the center of Claudia, a sharply defined crater 700 m (2,300 ft) across, which they say results in a more logical set of mapping quadrangles. All NASA publications, including images and maps of Vesta, use the Claudian meridian, which is unacceptable to the IAU. The IAU Working Group on Cartographic Coordinates and Rotational Elements recommended a coordinate system correcting the pole but rotating the Claudian longitude by 150° to coincide with Olbers Regio. It was accepted by the IAU, although it disrupts the maps prepared by the Dawn team, which had been positioned so that they would not bisect any major surface features. Physical characteristics Vesta is the second-most-massive body in the asteroid belt, although it is only 28% as massive as Ceres, the most massive body. Vesta is, however, the most massive body that formed in the asteroid belt, as Ceres is believed to have formed between Jupiter and Saturn. Vesta's density is lower than those of the four terrestrial planets, but is higher than those of most asteroids, as well as all of the moons in the Solar System except Io.
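A few of the figures above can be cross-checked with one-line calculations. The sketch below (Python) assumes two standard values not stated in this article — Vesta's semi-major axis of about 2.36 AU and the solar mass in kilograms — and uses the 525 km mean diameter quoted earlier; since Vesta is oblate, the spherical surface area is only a first approximation to the figure given in the next paragraph:

import math

M_SUN_KG = 1.989e30   # assumed solar mass, kg
a_au = 2.36           # assumed semi-major axis of Vesta, AU

period_yr = a_au ** 1.5                     # Kepler's third law: P^2 = a^3 (yr, AU)
mass_kg = 1.3029e-10 * M_SUN_KG             # Dawn's mass determination, converted
area_km2 = 4 * math.pi * (525.4 / 2) ** 2   # sphere of 525.4 km mean diameter

print(f"orbital period ~ {period_yr:.2f} yr (article: 3.6 yr)")
print(f"mass ~ {mass_kg:.2e} kg")
print(f"surface area ~ {area_km2:,.0f} km^2 (article: slightly under 900,000)")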
Vesta's surface area is about the same as the land area of Pakistan, Venezuela, Tanzania, or Nigeria; slightly under 900,000 km² (350,000 mi²; 90 million ha; 220 million acres). It has an only partially differentiated interior. Vesta is only slightly larger (525.4±0.2 km) than 2 Pallas (512±3 km) in mean diameter, but is about 25% more massive. Vesta's shape is close to a gravitationally relaxed oblate spheroid, but the large concavity and protrusion at the southern pole (see 'Surface features' below) combined with a mass less than 5×10²⁰ kg precluded Vesta from automatically being considered a dwarf planet under International Astronomical Union (IAU) Resolution XXVI 5. A 2012 analysis of Vesta's shape and gravity field using data gathered by the Dawn spacecraft has shown that Vesta is currently not in hydrostatic equilibrium. Temperatures on the surface have been estimated to lie between about −20 °C (253 K) with the Sun overhead, dropping to about −190 °C (83.1 K) at the winter pole. Typical daytime and nighttime temperatures are −60 °C (213 K) and −130 °C (143 K), respectively. This estimate is for 6 May 1996, very close to perihelion, although details vary somewhat with the seasons. Surface features Before the arrival of the Dawn spacecraft, some Vestan surface features had already been resolved using the Hubble Space Telescope and ground-based telescopes (e.g., the Keck Observatory). The arrival of Dawn in July 2011 revealed the complex surface of Vesta in detail. The most prominent of these surface features are two enormous impact basins, the 500-kilometre-wide (311 mi) Rheasilvia, centered near the south pole, and the 400-kilometre-wide (249 mi) Veneneia. The Rheasilvia impact basin is younger and overlies the Veneneia. The Dawn science team named the younger, more prominent crater Rheasilvia after the mother of Romulus and Remus and a mythical vestal virgin. Its width is 95% of the mean diameter of Vesta. The crater is about 19 km (12 mi) deep. A central peak rises 23 km (14 mi) above the lowest measured part of the crater floor, and the highest measured part of the crater rim is 31 km (19 mi) above the crater floor low point. It is estimated that the impact responsible excavated about 1% of the volume of Vesta, and it is likely that the Vesta family and V-type asteroids are the products of this collision. If this is the case, then the fact that 10 km (6 mi) fragments have survived bombardment until the present indicates that the crater is at most only about 1 billion years old. It would also be the site of origin of the HED meteorites. All the known V-type asteroids taken together account for only about 6% of the ejected volume, with the rest presumably either in small fragments, ejected by approaching the 3:1 Kirkwood gap, or perturbed away by the Yarkovsky effect or radiation pressure. Spectroscopic analyses of the Hubble images have shown that this crater has penetrated deep through several distinct layers of the crust, and possibly into the mantle, as indicated by spectral signatures of olivine. Subsequent analysis of data from the Dawn mission provided much greater detail on Rheasilvia's structure and composition, confirming it as one of the largest impact structures known relative to its parent body's size. The impact clearly modified the pre-existing very large Veneneia structure, indicating Rheasilvia's younger age.
Rheasilvia's size makes Vesta's southern topography unique, creating a flattened southern hemisphere and contributing significantly to the asteroid's overall oblate shape. Rheasilvia's ~22 km (14 mi) central peak stands as one of the tallest mountains identified in the Solar System. Its base width of roughly 180 km (110 mi) and its complex morphology distinguish it from the simpler central peaks seen in smaller craters. Numerical modeling indicates that such a large central structure within a ~505 km (314 mi) diameter basin requires formation on a differentiated body with significant gravity. Scaling laws for craters on smaller asteroids fail to predict such a feature; instead, impact dynamics involving transient crater collapse and rebound of the underlying material (potentially upper mantle) are needed to explain its formation. Hydrocode simulations suggest the impactor responsible was likely 60–70 km (37–43 mi) across, impacting at roughly 5.4 km/s (3.4 mi/s). Models with an impact angle of around 30–45 degrees from vertical better match the detailed morphology of the basin and its prominent peak. Crater density measurements on Rheasilvia's relatively unmodified floor materials and surrounding ejecta deposits, calibrated using standard lunar chronology functions adapted for Vesta's location, place the impact event at approximately 1 billion years ago. This age makes Rheasilvia a relatively young feature on a protoplanetary body formed early in Solar System history. The estimated excavation of ~1% of Vesta's volume provides a direct link to the Vesta family of asteroids (Vestoids) and the HED meteorites. Since Vesta's spectral signature matches that of the Vestoids and HEDs, this strongly indicates they are fragments ejected from Vesta, most likely during the Rheasilvia impact. The Dawn mission's VIR instrument helped to confirm the basin's deep excavation and compositional diversity. VIR mapping revealed spectral variations across the basin consistent with the mixing of different crustal layers expected from the HED meteorites. Signatures matching eucrites (shallow crustal basalts) and diogenites (deeper crustal orthopyroxenites) were identified, usually correlating with specific morphological features such as crater walls or slump blocks. The confirmed signature of olivine-rich material, first hinted at by Hubble observations, is strongest on the flanks of the central peak and in specific patches along the basin rim and walls, suggesting that the olivine is not uniformly distributed but rather exposed in distinct outcrops. Because olivine is the dominant mineral expected in Vesta's mantle beneath the HED-like crust, its presence indicates that the Rheasilvia impact penetrated Vesta's entire crust (~20–40 km (12–25 mi) thick in the region) and excavated material from the upper mantle. Furthermore, the global stresses resulting from this massive impact are considered the likely trigger for the formation of the large trough systems, like Divalia Fossa, that encircle Vesta's equatorial regions. Several old, degraded craters approach Rheasilvia and Veneneia in size, although none are quite so large. They include Feralia Planitia, which is 270 km (168 mi) across. More recent, sharper craters range up to the 158 km (98 mi) Varronilla and the 196 km (122 mi) Postumia. Dust fills some craters, creating so-called dust ponds, a phenomenon in which pockets of dust are seen on celestial bodies without a significant atmosphere.
These are smooth deposits of dust that have accumulated in depressions on the surface of the body (such as craters), contrasting with the rocky terrain around them. On Vesta, both type 1 (formed from impact melt) and type 2 (electrostatically formed) dust ponds have been identified within 0°–30° N/S, that is, the equatorial region; ten craters have been identified with such formations. The "snowman craters" are a group of three adjacent craters in Vesta's northern hemisphere. Their official names, from largest to smallest (west to east), are Marcia, Calpurnia, and Minucia. Marcia is the youngest and cross-cuts Calpurnia; Minucia is the oldest. The majority of the equatorial region of Vesta is sculpted by a series of parallel troughs designated Divalia Fossae; the longest trough is 10–20 kilometres (6.2–12.4 mi) wide and 465 kilometres (289 mi) long. Although Vesta is only one-seventh the size of the Moon, Divalia Fossae dwarfs the Grand Canyon. A second series, inclined to the equator, is found further north. This northern trough system is named Saturnalia Fossae, with its largest trough being roughly 40 km (25 mi) wide and over 370 km (230 mi) long. These troughs are thought to be large-scale graben resulting from the impacts that created the Rheasilvia and Veneneia craters, respectively. They are some of the longest chasms in the Solar System, nearly as long as Ithaca Chasma on Tethys. Graben of this kind, however, can form only in a body that is differentiated, which Vesta may not fully be; alternatively, it has been proposed that the troughs may be radial sculptures created by secondary cratering from Rheasilvia. Compositional information from the visible and infrared spectrometer (VIR), gamma-ray and neutron detector (GRaND), and framing camera (FC) all indicates that the majority of the surface composition of Vesta is consistent with the composition of the howardite, eucrite, and diogenite meteorites. The Rheasilvia region is richest in diogenite, consistent with the Rheasilvia-forming impact having excavated material from deeper within Vesta. The presence of olivine within the Rheasilvia region would also be consistent with excavation of mantle material. However, olivine has only been detected in localized regions of the northern hemisphere, not within Rheasilvia. The origin of this olivine is currently unclear. Though olivine was expected by astronomers to have originated from Vesta's mantle prior to the arrival of the Dawn orbiter, the lack of olivine within the Rheasilvia and Veneneia impact basins complicates this view. Both impact basins excavated Vestian material down to 60–100 km (37–62 mi), far deeper than the expected thickness of ~30–40 km (19–25 mi) for Vesta's crust. Vesta's crust may be far thicker than expected, or the violent impact events that created Rheasilvia and Veneneia may have mixed material enough to obscure olivine from observations. Alternatively, Dawn's observations of olivine could instead be due to delivery by olivine-rich impactors, unrelated to Vesta's internal structure. Pitted terrain has been observed in four craters on Vesta: Marcia, Cornelia, Numisia and Licinia. The formation of the pitted terrain is proposed to be degassing of impact-heated volatile-bearing material. Along with the pitted terrain, curvilinear gullies are found in Marcia and Cornelia craters.
The curvilinear gullies end in lobate deposits, which are sometimes covered by pitted terrain, and are proposed to form by the transient flow of liquid water after buried deposits of ice were melted by the heat of the impacts. Hydrated materials have also been detected, many of which are associated with areas of dark material. Consequently, dark material is thought to be largely composed of carbonaceous chondrite, which was deposited on the surface by impacts. Carbonaceous chondrites are comparatively rich in mineralogically bound OH. Geology A large collection of potential samples from Vesta is accessible to scientists, in the form of over 1,200 HED meteorites (Vestan achondrites), giving insight into Vesta's geologic history and structure. NASA Infrared Telescope Facility (NASA IRTF) studies of the asteroid (237442) 1999 TA10 suggest that it originated from deeper within Vesta than the HED meteorites. Vesta is thought to consist of a metallic iron–nickel core, variously estimated to be 90 km (56 mi) to 220 km (140 mi) in diameter, an overlying rocky olivine mantle, and a surface crust of similar composition to the HED meteorites. Starting from the first appearance of calcium–aluminium-rich inclusions (the first solid matter in the Solar System, forming about 4.567 billion years ago), a likely time line of Vesta's accretion, differentiation, and resurfacing has been reconstructed. Vesta is the only known intact asteroid that has been resurfaced in this manner. Because of this, some scientists refer to Vesta as a protoplanet. On the basis of the sizes of V-type asteroids (thought to be pieces of Vesta's crust ejected during large impacts) and the depth of Rheasilvia crater (see above), the crust is thought to be roughly 10 kilometres (6 mi) thick. The Dawn spacecraft found evidence that the troughs that wrap around Vesta could be graben formed by impact-induced faulting (see the discussion of troughs above), meaning that Vesta has more complex geology than other asteroids. The impacts that created the Rheasilvia and Veneneia craters occurred when Vesta was no longer warm and plastic enough to return to an equilibrium shape, distorting its once rounded shape and prohibiting it from being classified as a dwarf planet today. Vesta's surface is covered by regolith distinct from that found on the Moon or on asteroids such as Itokawa, because space weathering acts differently there. Vesta's surface shows no significant trace of nanophase iron, because the impact speeds on Vesta are too low to make rock melting and vaporization an appreciable process. Instead, regolith evolution is dominated by brecciation and the subsequent mixing of bright and dark components. The dark component is probably due to the infall of carbonaceous material, whereas the bright component is the original Vesta basaltic soil. Fragments Some small Solar System bodies are suspected to be fragments of Vesta caused by impacts; the Vestian asteroids and HED meteorites are examples. The V-type asteroid 1929 Kollaa has been determined to have a composition akin to cumulate eucrite meteorites, indicating its origin deep within Vesta's crust. Vesta is currently one of only eight identified Solar System bodies of which we have physical samples, coming from a number of meteorites suspected to be Vestan fragments. It is estimated that 1 out of 16 meteorites originated from Vesta.
The other identified Solar System samples are from Earth itself, meteorites from Mars, meteorites from the Moon, and samples returned from the Moon, the comet Wild 2, and the asteroids 25143 Itokawa, 162173 Ryugu, and 101955 Bennu.[k] Exploration In 1981, a proposal for an asteroid mission was submitted to the European Space Agency (ESA). Named the Asteroidal Gravity Optical and Radar Analysis (AGORA), this spacecraft was to launch some time in 1990–1994 and perform two flybys of large asteroids; the preferred target for this mission was Vesta. AGORA would reach the asteroid belt either by a gravitational slingshot trajectory past Mars or by means of a small ion engine. However, the proposal was refused by the ESA. A joint NASA–ESA asteroid mission was then drawn up for a Multiple Asteroid Orbiter with Solar Electric Propulsion (MAOSEP), with one of the mission profiles including an orbit of Vesta. NASA indicated that it was not interested in an asteroid mission, and the ESA instead set up a technological study of a spacecraft with an ion drive. Other missions to the asteroid belt were proposed in the 1980s by France, Germany, Italy and the United States, but none were approved. Exploration of Vesta by fly-by and impacting penetrator was the second main target of the first plan of the multi-aimed Soviet Vesta mission, developed in cooperation with European countries for realisation in 1991–1994 but cancelled due to the dissolution of the Soviet Union. In the early 1990s, NASA initiated the Discovery Program, which was intended to be a series of low-cost scientific missions. In 1996, the program's study team recommended as a high priority a mission to explore the asteroid belt using a spacecraft with an ion engine. Funding for this program remained problematic for several years, but by 2004 the Dawn vehicle had passed its critical design review and construction proceeded. It launched on 27 September 2007 as the first space mission to Vesta. On 3 May 2011, Dawn acquired its first targeting image, 1.2 million kilometres (750,000 mi) from Vesta. On 16 July 2011, NASA confirmed that it had received telemetry from Dawn indicating that the spacecraft had successfully entered Vesta's orbit. It was scheduled to orbit Vesta for one year, until July 2012. Dawn's arrival coincided with late summer in the southern hemisphere of Vesta, with the large crater at Vesta's south pole (Rheasilvia) in sunlight. Because a season on Vesta lasts eleven months, the northern hemisphere, including anticipated compression fractures opposite the crater, would become visible to Dawn's cameras before it left orbit. Dawn left orbit around Vesta on 4 September 2012 at 11:26 p.m. PDT to travel to Ceres. NASA/DLR released imagery and summary information from a survey orbit, two high-altitude orbits (60–70 m/pixel) and a low-altitude mapping orbit (20 m/pixel), including digital terrain models, videos and atlases. Scientists also used Dawn to calculate Vesta's precise mass and gravity field; the subsequent determination of the J2 component yielded a core diameter estimate of about 220 km (140 mi), assuming a crustal density similar to that of the HED meteorites. Dawn data can be accessed by the public at the UCLA website. Detailed images retrieved during the high-altitude (60–70 m/pixel) and low-altitude (~20 m/pixel) mapping orbits are available on the Dawn Mission website of JPL/NASA.
Visibility Its size and unusually bright surface make Vesta the brightest asteroid, and it is occasionally visible to the naked eye from dark skies (without light pollution). In May and June 2007, Vesta reached a peak magnitude of +5.4, the brightest since 1989; at that time, opposition and perihelion were only a few weeks apart. It was brighter still at its 22 June 2018 opposition, reaching a magnitude of +5.3. Less favorable oppositions, such as during late autumn 2008 in the Northern Hemisphere, still had Vesta at a magnitude of from +6.5 to +7.3. Even when in conjunction with the Sun, Vesta will have a magnitude around +8.5; thus from a pollution-free sky it can be observed with binoculars even at elongations much smaller than near opposition. In 2010, Vesta reached opposition in the constellation of Leo on the night of 17–18 February, at about magnitude 6.1, a brightness that makes it visible with binoculars but generally not to the naked eye. Under perfect dark-sky conditions where all light pollution is absent, it might be visible to an experienced observer without the use of a telescope or binoculars. Vesta came to opposition again on 5 August 2011, in the constellation of Capricornus, at about magnitude 5.6. Vesta was at opposition again on 9 December 2012. According to Sky & Telescope magazine, Vesta came within about 6 degrees of 1 Ceres during the winter of 2012 and the spring of 2013. Vesta orbits the Sun in 3.63 years and Ceres in 4.6 years, so every 17.4 years Vesta overtakes Ceres (the previous overtaking was in April 1996). On 1 December 2012, Vesta had a magnitude of 6.6, but it had decreased to 8.4 by 1 May 2013. Ceres and Vesta came within one degree of each other in the night sky in July 2014.
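The overtaking interval quoted above is the synodic period of the two orbits. A quick check (Python, using only the periods given in the text) gives a figure close to the article's rounded 17.4 years:

p_vesta, p_ceres = 3.63, 4.60   # orbital periods in years, from the text
synodic = 1 / (1 / p_vesta - 1 / p_ceres)
print(f"Vesta overtakes Ceres every ~{synodic:.1f} years")  # ~17.2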
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/MENA] | [TOKENS: 3105] |
Middle East and North Africa

The Middle East and North Africa (MENA), also referred to as West Asia and North Africa (WANA) or South West Asia and North Africa (SWANA), is a geographic region which comprises the Middle East (also called West Asia) and North Africa together. It exists as an alternative to the concept of the Greater Middle East, which comprises the bulk of the Muslim world. The region has no standardized definition and groupings may vary, but the term typically includes countries like Algeria, Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Libya, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Syria, Tunisia, the United Arab Emirates, and Yemen. As a regional identifier, the term "MENA" is often used in academia, military planning, disaster relief, media planning (as a broadcast region), and business writing. Moreover, the countries it spans share a number of cultural, economic, and environmental similarities; for example, some of the most extreme impacts of climate change will be felt in MENA. Some related terms have a wider definition than MENA, such as MENASA (lit. 'Middle East and North Africa and South Asia') or MENAP (lit. 'Middle East and North Africa and Afghanistan and Pakistan'). The term MENAT explicitly includes Turkey, which is excluded from some MENA definitions even though Turkey is almost always considered part of the Middle East proper. Ultimately, MENA can be considered a grouping scheme that brings together most of the Arab League and variously includes their neighbors, such as Iran, Turkey, Israel, Cyprus, the Caucasian countries, Afghanistan, Pakistan, Malta, and a few others.

Definitions

The Middle East and North Africa has no standardized definition; different organizations define the region as consisting of different territories, or do not define it as a region at all. There is no MENA region amongst the United Nations Regional Groups, nor in the United Nations geoscheme used by the UNSD (though the latter does feature two subregions called 'Western Asia' and 'Northern Africa'; see WANA). Some agencies and programmes of the United Nations do define the MENA region, but their definitions may contradict each other, and sometimes only apply to specific studies or reports. Historians Michael Dumper and Bruce Stanley stated in 2007: 'For the purposes of this volume, the editors have generally chosen to define the MENA region as stretching from Morocco to Iran and from Turkey to the Horn of Africa. This definition thus includes the twenty-two countries of the Arab League (including the Palestinian Authority enclaves in the West Bank and Gaza Strip), Turkey, Israel, Iran, and Cyprus.' They stressed, however, how controversial and problematic this definition is, and that other choices could also have been made according to various criteria. For its December 2012 global religion survey, the Pew Research Center grouped 20 countries and territories as 'the Middle East and North Africa', namely: 'Algeria, Bahrain, Egypt, Iraq, Israel, Jordan, Kuwait, Lebanon, Libya, Morocco, Oman, the Palestinian territories, Qatar, Saudi Arabia, Sudan, Syria, Tunisia, United Arab Emirates, Western Sahara and Yemen.' For the Global Peace Index 2020, the Institute for Economics & Peace defined the MENA region as containing 20 countries: Algeria, Bahrain, Egypt, Iraq, Iran, Israel, Jordan, Kuwait, Lebanon, Libya, Morocco, Oman, Palestine, Qatar, Saudi Arabia, Sudan, Syria, Tunisia, United Arab Emirates, and Yemen.
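The two 20-entry groupings quoted above differ in exactly one member each, which illustrates how such definitions can contradict one another. A minimal sketch in Python; treating Pew's 'Palestinian territories' and the GPI's 'Palestine' as the same entry is a manual normalization, not something either source states:

```python
# The Pew (2012) and Global Peace Index (2020) "MENA" groupings as sets.
pew_2012 = {
    "Algeria", "Bahrain", "Egypt", "Iraq", "Israel", "Jordan", "Kuwait",
    "Lebanon", "Libya", "Morocco", "Oman", "Palestine", "Qatar",
    "Saudi Arabia", "Sudan", "Syria", "Tunisia", "United Arab Emirates",
    "Western Sahara", "Yemen",
}
gpi_2020 = {
    "Algeria", "Bahrain", "Egypt", "Iran", "Iraq", "Israel", "Jordan",
    "Kuwait", "Lebanon", "Libya", "Morocco", "Oman", "Palestine", "Qatar",
    "Saudi Arabia", "Sudan", "Syria", "Tunisia", "United Arab Emirates",
    "Yemen",
}

print("Pew only:", pew_2012 - gpi_2020)  # {'Western Sahara'}
print("GPI only:", gpi_2020 - pew_2012)  # {'Iran'}
```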
Due to the geographic ambiguity and Eurocentric nature of the term "Middle East", some people, especially in sciences such as agriculture and climatology, prefer to use other terms like "SWANA" (South West Asia and North Africa), "WANA" (West Asia and North Africa), or the less common NAWA (North Africa–West Asia). Usage of the term WANA has also been advanced by postcolonial studies. The United Nations geoscheme, used by the UN Statistics Division for its political-geography statistical needs, does not define a single WANA region, but it does feature two subregions called Western Asia and Northern Africa, respectively. In a 1995 publication, the then-Aleppo-based International Center for Agricultural Research in the Dry Areas (ICARDA) defined its West Asia/North Africa (WANA) region as 25 countries, including: 'Afghanistan, Algeria, Egypt, Ethiopia, Iran, Iraq, Jordan, Lebanon, Libya, Morocco, Oman, Pakistan, Saudi Arabia, Sudan, Syria, Tunisia, Turkey and Yemen.' It noted that CGIAR's Technical Advisory Committee (TAC) excluded Ethiopia, Sudan and Pakistan from its 1992 WANA definition, but otherwise listed the same countries. In a 2011 study, ICARDA stated 27 countries/territories: 'The WANA region includes: Afghanistan, Algeria, Bahrain, Djibouti, Egypt, Eritrea, Ethiopia, Gaza Strip, Iran, Iraq, Jordan, Kuwait, Lebanon, Libya, Mauritania, Morocco, Oman, Pakistan, Qatar, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, Turkey, United Arab Emirates, and Yemen.' In a preparatory working paper for the June 2004 G8 Summit, the U.S. government (at the end of the George W. Bush administration's first term) defined the Greater Middle East as including the Arab states, Israel, Turkey, Iran, Pakistan and Afghanistan. From April 2013, the International Monetary Fund started using a new analytical region called MENAP (Middle East, North Africa, Afghanistan, and Pakistan), which adds Afghanistan and Pakistan to the MENA countries. MENAP has since become a prominent economic grouping in IMF reports. MENASA refers to the Middle East, North Africa and South Asia region; it comprises the MENA region together with South Asia, with Dubai chosen by the United Nations as the data hub for the region. In some contexts, specifically the Lauder Institute at the University of Pennsylvania, the region is abbreviated as SAMENA instead of the more common MENASA. The term MENAT (Middle East, North Africa, and Turkey) has been used to include Turkey in the list of MENA countries. The term Near East was commonly used before the term Middle East was coined by the British in the early 20th century, and the term Ancient Near East is commonly used by scholars for the region in antiquity. Some organisations and scholars continue to use 'Near East' today, with some including North Africa, but definitions range widely and there is no consensus on its geographical application. EMME (Eastern Mediterranean and Middle East) refers to a grouping of 18 nations situated in and around the Eastern Mediterranean and Middle East: Bahrain, Cyprus, Egypt, Greece, Iran, Iraq, Israel, Jordan, Kuwait, Lebanon, Oman, Palestine, Qatar, Saudi Arabia, Syria, Turkey, United Arab Emirates, and Yemen.

Geography

In 2018, the MENA region emitted 3.2 billion tonnes of carbon dioxide and produced 8.7% of global greenhouse gas (GHG) emissions, despite making up only 6% of the global population.
These emissions come mostly from the energy sector, an integral component of many Middle Eastern and North African economies due to the extensive oil and natural gas reserves found within the region. The Middle East is one of the regions most vulnerable to climate change. The impacts include increased drought, aridity, heatwaves and sea level rise. Sharp global temperature and sea level changes, shifting precipitation patterns and increased frequency of extreme weather events are some of the main impacts of climate change as identified by the Intergovernmental Panel on Climate Change (IPCC). The MENA region is especially vulnerable to such impacts due to its arid and semi-arid environment, facing climatic challenges such as low rainfall, high temperatures and dry soil. The climatic conditions that foster such challenges for MENA are projected by the IPCC to worsen throughout the 21st century. If greenhouse gas emissions are not significantly reduced, part of the MENA region risks becoming uninhabitable before the year 2100. Climate change is expected to put significant strain on already scarce water and agricultural resources within the MENA region, threatening the national security and political stability of all included countries. Over 60 percent of the region's population lives in areas of high or very high water stress, compared to the global average of 35 percent. This has prompted some MENA countries to engage with the issue of climate change on an international level through environmental accords such as the Paris Agreement. Law and policy are also being established at the national level amongst MENA countries, with a focus on the development of renewable energies.

Politics

In its Global Peace Index 2020, the Institute for Economics & Peace stated that 'the Middle East and North Africa remains the world's least peaceful region, despite improvements for 11 countries'. According to an in-depth multi-part study by the Center for Strategic and International Studies (CSIS) published in April 2016, the factors shaping the MENA region are exceedingly complex, and it is difficult to find 'any overall model that fits the different variables involved'. It found that there were 'deep structural causes of violence and instability'. Wars and upheavals are partly 'shaped by the major tribal, ethnic, sectarian, and regional differences', by 'demographic, economic, and security trends', and by 'quality of governance, internal security system, justice systems, and [social] progress.' In some countries, the societal factors necessary for successful democratic change (often championed by some in the region and in the West to address various issues) are absent, and political revolutions may not always lead to more stability, nor solve the underlying problems in a given MENA country. However, the study also found that 'the majority of MENA nations have remained relatively stable and continue to make progress'.
During and after the decolonisation of Africa and Asia in the 20th century, many armed conflicts occurred in the MENA region, including but not limited to the Rif War; the Iraqi–Kurdish conflict; the Arab–Israeli conflict; the Western Sahara conflict; the Lebanese Civil War; the Kurdish–Turkish conflict (1978–present); the Iranian Revolution; the Iran–Iraq War; the Iran–Saudi Arabia proxy conflict; the Berber Spring; the Toyota War; the Invasion of Kuwait and the Gulf War; the Algerian Civil War; the Iraqi Kurdish Civil War; the rise of terrorism and anti-terrorist actions; and the U.S.-led invasion of Iraq in 2003 and the subsequent Iraq War. The Arab Spring (2010–2011) led to the Tunisian Revolution, the Egyptian revolution of 2011 and the Egyptian Crisis (2011–2014), while also sparking wars throughout the region, such as the Syrian Civil War, the Libyan Civil War, the Yemeni Civil War and the Iraqi war against ISIS (the Islamic State of Iraq and the Levant).[citation needed] During the Sudanese Revolution, months of protests and a military coup led to the fall of Omar al-Bashir's regime and the initiation of the 2019–2022 Sudanese transition to democracy and the Sudanese peace process.

Economy and education

The MENA region has vast reserves of petroleum and natural gas that make it a vital source of global economic stability. According to the Oil and Gas Journal (January 1, 2009), the MENA region has 60% of the world's oil reserves (810.98 billion barrels (128.936 km3)) and 45% of the world's natural gas reserves (2,868,886 billion cubic feet (81,237.8 km3)); the unit conversions behind these figures are sketched below. As of 2023, 7 of the 13 OPEC nations are within the MENA region.[citation needed] According to the Pew Research Center's 2016 "Religion and Education Around the World" study, 40% of the adult population in MENA had completed less than a year of primary school; the fraction was higher for women, of whom half had been to school for less than a year. Investment also flows from the Middle East into North Africa, with research finding that bilateral trade between the United Arab Emirates and Africa increased by more than 38% in the two years to the end of 2023.

Demographics

The demographics of the Middle East and North Africa (MENA) region show a highly populated, culturally diverse region spanning three continents. As of 2023, the population was around 501 million. The class, cultural, ethnic, governmental, linguistic and religious make-up of the region is highly variable. Debates on which countries should be included in the Middle East are wide-ranging. The Greater Middle East and North Africa region can include the Caucasus, Cyprus, Afghanistan, and several sub-Saharan African states due to various social, religious and historic ties. The most commonly accepted countries in the MENA region are included on this page.
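A minimal cross-check of the reserve figures quoted in the economy section above, converting barrels and cubic feet to cubic kilometres; the conversion factors are standard values rather than figures from the text:

```python
# Convert the quoted oil and gas reserve figures to cubic kilometres.
BARREL_M3 = 0.158987294928   # one oil barrel in cubic metres (standard value)
FT3_M3 = 0.0283168466        # one cubic foot in cubic metres (standard value)

oil_barrels = 810.98e9       # 810.98 billion barrels
gas_ft3 = 2_868_886e9        # 2,868,886 billion cubic feet

oil_km3 = oil_barrels * BARREL_M3 / 1e9   # m^3 -> km^3
gas_km3 = gas_ft3 * FT3_M3 / 1e9

print(f"oil: {oil_km3:.3f} km^3")   # ~128.94 km^3, matching the quoted 128.936
print(f"gas: {gas_km3:.1f} km^3")   # ~81,234 km^3, close to the quoted 81,237.8
```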
Culture

Islam is by far the dominant religion in nearly all of the MENA territories; 91.2% of the population is Muslim. The Middle East–North Africa region comprises 20 countries and territories with an estimated Muslim population of 315 million, or about 23% of the world's Muslim population. The term "MENA" is often defined in part in relation to the majority-Muslim countries located in the region, although several nations in the region are not Muslim-dominated. Major non-Islamic religions native to the region are Christianity, Judaism, Yazidism, Druzeism, African folk religions, Berberism, and other forms of Arab paganism.[citation needed] Migrant populations, mostly within the Gulf nations, largely practice the beliefs they brought with them, such as Buddhism and Hinduism among South Asian, East Asian and Southeast Asian migrants. Artists with connections to or within the geographic region of the Middle East and North Africa have created art that reflects queer expressions: the rejection, alteration, or challenging of social norms that prioritize heterosexuality and normative gender roles. Queer art practices are not limited to a specific medium; queer art spans performance, painting, installation, photography, video, sculpture, fiber arts, drawing, mixed-media practices, and more. Artists across the Middle East and North Africa and the diaspora have been exploring and reconciling the relationship between their queer identities and their Middle Eastern and North African identities. The geographic region of Southwest Asia and North Africa is also referred to as the Middle East and North Africa, the Middle East, or the Islamicate world. The term Middle East and North Africa describes the region by the areas of the continents it spans, rather than positioning the region in relation to Europe or North America (as with the term Middle East). Several predominant themes emerge in the queer art of the Middle East and North Africa. Artists frequently explore gender identity, constructing binary interpretations of gender and creating non-normative expressions of gender, often through engagement with historical or archival material. Representations of sexuality, intimacy and diaspora are likewise frequent themes in queer art across the region.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/List_of_fairy_tales] | [TOKENS: 316] |
List of fairy tales

Fairy tales are stories that range from those in folklore to more modern stories defined as literary fairy tales. Despite subtle differences in the categorizing of fairy tales, folklore, fables, myths, and legends, a modern definition of the literary fairy tale, as provided by Jens Tismar's monograph in German, is a story that differs "from an oral folk tale" in that it is written by "a single identifiable author". Literary fairy tales differ from oral folktales, which can be characterized as "simple and anonymous", and exist in a mutable and difficult-to-define genre with a close relationship to oral tradition. The list is organized by region and tradition: Non-categorized; Afghanistan; Africa; Albania; Ancient Egypt; Arabic; Argentina; Armenia; Asia (East Asia), where well-known Japanese "fairy tales"[a] are often found in the Otogi-zōshi or the Konjaku Monogatarishū; Asia (Southeast Asia); Australia; Azerbaijan; Baltic; Belgium; Canada; Celtic; Chile; England; Finland; France; Georgia; Germany, together with German-speaking Austria, Switzerland, etc.; Greece; Hungary, covered in part by Jeremiah Curtin's Myths and Folk-tales of the Russians, Western Slavs, and Magyars; the Iberian Peninsula; the Indian subcontinent, including India, Pakistan, Bangladesh, Sri Lanka, and Nepal; Indonesia; Iran; Iraq; Ireland; the Isle of Man; Italy; Lebanon; Libya; Mexico; the Netherlands; New Zealand; Nicaragua; Scandinavia, where Hans Christian Andersen's works may be considered "literary fairytales";[b] Romania; Russia; Slavic; Scotland; Syria; Turkey; the United States; Uzbekistan; Venezuela; and Wales.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Interstellar_medium] | [TOKENS: 5734] |
Interstellar medium

The interstellar medium (ISM) is the matter and radiation that exists in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma: it is everywhere at least slightly ionized), responding to electromagnetic radiation, and not as a collection of non-interacting particles. The interstellar medium is composed of multiple phases distinguished by whether matter is ionic, atomic, or molecular, and by the temperature and density of the matter. The interstellar medium is composed primarily of hydrogen, followed by helium with trace amounts of carbon, oxygen, and nitrogen. The thermal pressures of these phases are in rough equilibrium with one another. Magnetic fields and turbulent motions also provide pressure in the ISM, and are typically more important, dynamically, than the thermal pressure. In cold, dense regions of the interstellar medium, matter is primarily in molecular form and reaches number densities of 10^12 molecules per m3 (1 trillion molecules per m3). In hot, diffuse regions, gas is highly ionized, and the density may be as low as 100 ions per m3. Compare this with a number density of roughly 10^25 molecules per m3 for air at sea level, and 10^16 molecules per m3 (10 quadrillion molecules per m3) for a laboratory high-vacuum chamber. Within our galaxy, by mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, by number 91% of atoms are hydrogen and 8.9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium, known as "metals" in astronomical parlance. By mass this amounts to 70% hydrogen, 28% helium, and 1.5% heavier elements. The hydrogen and helium are primarily a result of primordial nucleosynthesis, while the heavier elements in the ISM are mostly a result of enrichment (due to stellar nucleosynthesis) in the process of stellar evolution. The ISM plays a crucial role in astrophysics precisely because of its intermediate role between stellar and galactic scales. Stars form within the densest regions of the ISM, molecular clouds, and in turn replenish the ISM with matter and energy through planetary nebulae, stellar winds, and supernovae. This interplay between stars and the ISM helps determine the rate at which a galaxy depletes its gaseous content, and therefore its lifespan of active star formation. Voyager 1 reached the ISM on August 25, 2012, making it the first artificial object from Earth to do so. Interstellar plasma and dust will be studied until the estimated mission end date of 2025. Its twin Voyager 2 entered the ISM on November 5, 2018.
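The number fractions and mass fractions quoted above are mutually consistent, as a short conversion shows. A minimal sketch in Python; the mean "metal" mass of ~16 u (oxygen-like) is an illustrative assumption, not a figure from the text:

```python
# Convert the quoted ISM number fractions into approximate mass fractions.
number_fractions = {"H": 0.91, "He": 0.089, "metals": 0.001}
atomic_masses_u = {"H": 1.008, "He": 4.003, "metals": 16.0}  # metals: assumed mean

total_mass = sum(number_fractions[s] * atomic_masses_u[s] for s in number_fractions)
for species in number_fractions:
    share = number_fractions[species] * atomic_masses_u[species] / total_mass
    print(f"{species}: {share:.1%}")
# H ~71%, He ~28%, metals ~1%: close to the quoted 70% / 28% / 1.5% by mass.
```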
Interstellar matter

Table 1 shows a breakdown of the properties of the components of the ISM of the Milky Way. Field, Goldsmith & Habing (1969) put forward the static two-phase equilibrium model to explain the observed properties of the ISM. Their modeled ISM included a cold dense phase (T < 300 K), consisting of clouds of neutral and molecular hydrogen, and a warm intercloud phase (T ~ 10^4 K), consisting of rarefied neutral and ionized gas. McKee & Ostriker (1977) added a dynamic third phase that represented the very hot (T ~ 10^6 K) gas that had been shock heated by supernovae and constituted most of the volume of the ISM. These phases correspond to the temperature ranges in which heating and cooling can reach a stable equilibrium. Their paper formed the basis for further study over the subsequent three decades. However, the relative proportions of the phases and their subdivisions are still not well understood. The basic physics behind these phases can be understood through the behaviour of hydrogen, since this is by far the largest constituent of the ISM. The different phases are roughly in pressure balance over most of the Galactic disk, since regions of excess pressure will expand and cool, and likewise under-pressure regions will be compressed and heated. Therefore, since P = nkT, hot regions (high T) generally have low particle number density n. Coronal gas has low enough density that collisions between particles are rare and so little radiation is produced; hence there is little loss of energy and the temperature can stay high for periods of hundreds of millions of years. In contrast, once the temperature falls to around 10^5 K, with correspondingly higher density, protons and electrons can recombine to form hydrogen atoms, emitting photons which take energy out of the gas, leading to runaway cooling. Left to itself, this would produce the warm neutral medium. However, OB stars are so hot that some of their photons have energy greater than the Lyman limit, E > 13.6 eV, enough to ionize hydrogen. Such photons will be absorbed by, and ionize, any neutral hydrogen atom they encounter, setting up a dynamic equilibrium between ionization and recombination such that gas close enough to OB stars is almost entirely ionized, with a temperature around 8000 K (unless already in the coronal phase), out to the distance where all the ionizing photons are used up. This ionization front marks the boundary between the warm ionized and warm neutral medium. OB stars, and also cooler ones, produce many more photons with energies below the Lyman limit, which pass through the ionized region almost unabsorbed. Some of these have high enough energy (> 11.3 eV) to ionize carbon atoms, creating a C II ("ionized carbon") region outside the (hydrogen) ionization front. In dense regions this may also be limited in size by the availability of photons, but often such photons can penetrate throughout the neutral phase and only get absorbed in the outer layers of molecular clouds. Photons with E > 4 eV or so can break up molecules such as H2 and CO, creating a photodissociation region (PDR), which is more or less equivalent to the warm neutral medium. These processes contribute to the heating of the WNM. The distinction between warm and cold neutral medium is again due to a range of temperature/density in which runaway cooling occurs.
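The inverse relation between temperature and density implied by P = nkT above can be made concrete. A minimal sketch in Python; the thermal pressure P/k ~ 3000 K cm^-3 is a typical Milky Way value assumed here, not a figure from the text:

```python
# Number density of each phase if all share one thermal pressure: n = (P/k)/T.
P_over_k = 3000.0  # K cm^-3, assumed typical value for the Galactic disk

phase_temperatures_k = {
    "cold neutral medium (~100 K)": 1e2,
    "warm phases (~10^4 K)": 1e4,
    "coronal gas (~10^6 K)": 1e6,
}
for phase, T in phase_temperatures_k.items():
    print(f"{phase}: n ~ {P_over_k / T:g} cm^-3")
# Cold gas is dense, coronal gas is extremely tenuous, as described above.
```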
The densest molecular clouds have significantly higher pressure than the interstellar average, since they are bound together by their own gravity. When stars form in such clouds, especially OB stars, they convert the surrounding gas into the warm ionized phase, a several-hundredfold increase in temperature. Initially the gas is still at molecular cloud densities, and so at vastly higher pressure than the ISM average: this is a classical H II region. The large overpressure causes the ionized gas to expand away from the remaining molecular gas (a champagne flow), and the flow continues until either the molecular cloud is fully evaporated or the OB stars reach the end of their lives, after a few million years. At that point the OB stars explode as supernovae, creating blast waves in the warm gas that increase temperatures to the coronal phase (supernova remnants, SNR). These too expand and cool over several million years until they return to average ISM pressure. Most discussion of the ISM concerns spiral galaxies like the Milky Way, in which nearly all the mass in the ISM is confined to a relatively thin disk, typically with a scale height of about 100 parsecs (300 light years), which can be compared to a typical disk diameter of 30,000 parsecs. Gas and stars in the disk orbit the galactic centre with typical orbital speeds of 200 km/s. This is much faster than the random motions of atoms in the ISM, but since the orbital motion of the gas is coherent, the average motion does not directly affect structure in the ISM. The vertical scale height of the ISM is set in roughly the same way as the Earth's atmosphere, as a balance between the local gravitational field (dominated by the stars in the disk) and the pressure. Further from the disk plane, the ISM is mainly in the low-density warm and coronal phases, which extend at least several thousand parsecs away from the disk plane. This galactic halo or 'corona' also contains significant magnetic field and cosmic ray energy density. The rotation of galaxy disks influences ISM structures in several ways. Since the angular velocity declines with increasing distance from the centre, any ISM feature, such as giant molecular clouds or magnetic field lines, that extends across a range of radius is sheared by differential rotation, and so tends to become stretched out in the tangential direction; this tendency is opposed by interstellar turbulence (see below), which tends to randomize the structures. Spiral arms are due to perturbations in the disk orbits (essentially ripples in the disk) that cause orbits to alternately converge and diverge, compressing and then expanding the local ISM. The visible spiral arms are the regions of maximum density, and the compression often triggers star formation in molecular clouds, leading to an abundance of H II regions along the arms. The Coriolis force also influences large ISM features. Irregular galaxies such as the Magellanic Clouds have interstellar media similar to those of spirals, but less organized. In elliptical galaxies the ISM is almost entirely in the coronal phase, since there is no coherent disk motion to support cold gas far from the center: instead, the scale height of the ISM must be comparable to the radius of the galaxy. This is consistent with the observation that there is little sign of current star formation in ellipticals. Some elliptical galaxies do show evidence for a small disk component, with ISM similar to spirals, buried close to their centers. The ISM of lenticular galaxies, as with their other properties, appears intermediate between spirals and ellipticals. Very close to the center of most galaxies (within a few hundred light years at most), the ISM is profoundly modified by the central supermassive black hole: see Galactic Center for the Milky Way, and Active galactic nucleus for extreme examples in other galaxies. The rest of this article will focus on the ISM in the disk plane of spirals, far from the galactic center.
Astronomers describe the ISM as turbulent, meaning that the gas has quasi-random motions coherent over a large range of spatial scales. Unlike normal turbulence, in which the fluid motions are highly subsonic, the bulk motions of the ISM are usually larger than the sound speed. Supersonic collisions between gas clouds cause shock waves which compress and heat the gas, increasing the sound speed so that the flow is locally subsonic; thus supersonic turbulence has been described as 'a box of shocklets', and is inevitably associated with complex density and temperature structure. In the ISM this is further complicated by the magnetic field, which provides wave modes such as Alfvén waves that are often faster than pure sound waves: if turbulent speeds are supersonic but below the Alfvén wave speed, the behaviour is more like subsonic turbulence. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant H II regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, seen by X-ray satellite telescopes, or turbulent flows observed in radio telescope maps. Stars and planets, once formed, are unaffected by pressure forces in the ISM, and so do not take part in the turbulent motions, although stars formed in molecular clouds in a galactic disk share their general orbital motion around the galaxy center. Thus stars are usually in motion relative to their surrounding ISM. The Sun is currently traveling through the Local Interstellar Cloud, an irregular clump of the warm neutral medium a few parsecs across, within the low-density Local Bubble, a 100-parsec-radius region of coronal gas. In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the Voyager 1 and Voyager 2 space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose". The interstellar medium begins where the interplanetary medium of the Solar System ends. The solar wind slows to subsonic velocities at the termination shock, 90–100 astronomical units from the Sun. In the region beyond the termination shock, called the heliosheath, interstellar matter interacts with the solar wind. Voyager 1, the farthest human-made object from the Earth (a distinction it has held since 1998), crossed the termination shock on December 16, 2004 and later entered interstellar space when it crossed the heliopause on August 25, 2012, providing the first direct probe of conditions in the ISM (Stone et al. 2005).
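The turbulence discussion above compares turbulent speeds with the Alfvén wave speed. A minimal numerical sketch in Python; the field strength of 5 microgauss and the density of one hydrogen atom per cm^3 are typical textbook values assumed here, not figures from the text:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, SI units
M_H = 1.6726e-27       # mass of a hydrogen atom, kg
K_B = 1.380649e-23     # Boltzmann constant, J/K

B = 5e-10              # 5 microgauss, expressed in tesla (assumed value)
n = 1e6                # 1 atom per cm^3, expressed in m^-3 (assumed value)
rho = n * M_H          # mass density, kg/m^3

v_alfven = B / math.sqrt(MU0 * rho)
print(f"Alfven speed ~ {v_alfven / 1e3:.0f} km/s")  # ~11 km/s

# Adiabatic sound speed of warm (8000 K) atomic hydrogen, for comparison:
c_sound = math.sqrt(5 / 3 * K_B * 8000 / M_H)
print(f"sound speed ~ {c_sound / 1e3:.0f} km/s")    # ~10 km/s, comparable
```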
Dust grains in the ISM are responsible for extinction and reddening, the decreasing light intensity and shift in the dominant observable wavelengths of light from a star. These effects are caused by scattering and absorption of photons, and allow the ISM to be observed with the naked eye in a dark sky. The apparent rifts that can be seen in the band of the Milky Way – a uniform disk of stars – are caused by absorption of background starlight by dust in molecular clouds within a few thousand light years from Earth. This effect decreases rapidly with increasing wavelength ("reddening" is caused by greater absorption of blue than red light), and becomes almost negligible at mid-infrared wavelengths (> 5 μm). Extinction provides one of the best ways of mapping the three-dimensional structure of the ISM, especially since the advent of accurate distances to millions of stars from the Gaia mission. The total amount of dust in front of each star is determined from its reddening, and the dust is then located along the line of sight by comparing the dust column density in front of stars projected close together on the sky, but at different distances. By 2022 it was possible to generate a map of ISM structures within 3 kpc (10,000 light years) of the Sun. Far-ultraviolet light is absorbed effectively by the neutral hydrogen gas in the ISM. Specifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines. Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen. All photons with wavelength < 91.6 nm, the Lyman limit, can ionize hydrogen and are also very strongly absorbed. The absorption gradually decreases with increasing photon energy, and the ISM begins to become transparent again in soft X-rays, with wavelengths shorter than about 1 nm.

Heating and cooling

The ISM is usually far from thermodynamic equilibrium. Collisions establish a Maxwell–Boltzmann distribution of velocities, and the 'temperature' normally used to describe interstellar gas is the 'kinetic temperature', which describes the temperature at which the particles would have the observed Maxwell–Boltzmann velocity distribution in thermodynamic equilibrium. However, the interstellar radiation field is typically much weaker than that of a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. Therefore, bound levels within an atom or molecule in the ISM are rarely populated according to the Boltzmann formula (Spitzer 1978, § 2.4). Depending on the temperature, density, and ionization state of a portion of the ISM, different heating and cooling mechanisms determine the temperature of the gas. Grain heating by thermal exchange is very important in supernova remnants, where densities and temperatures are very high. Gas heating via grain–gas collisions is dominant deep in giant molecular clouds (especially at high densities). Far-infrared radiation penetrates deeply due to the low optical depth. Dust grains are heated via this radiation and can transfer thermal energy during collisions with the gas. A measure of the efficiency of this heating is given by the accommodation coefficient

$\alpha = \frac{T_2 - T}{T_d - T}$

where $T$ is the gas temperature, $T_d$ the dust temperature, and $T_2$ the post-collision temperature of the gas atom or molecule. This coefficient was measured by Burke & Hollenbach (1983) as α = 0.35.
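Rearranged, the accommodation coefficient gives the temperature of a gas particle after one grain collision: T2 = T + α(Td − T). A minimal sketch in Python; the gas and dust temperatures below are illustrative values, not figures from the text:

```python
def post_collision_temperature(t_gas, t_dust, alpha=0.35):
    """Gas-particle temperature after one grain collision.

    alpha=0.35 is the Burke & Hollenbach (1983) measurement cited above;
    the particle picks up that fraction of the gas-dust temperature gap.
    """
    return t_gas + alpha * (t_dust - t_gas)

# Illustrative values: 20 K gas colliding with 100 K grains.
print(post_collision_temperature(t_gas=20.0, t_dust=100.0))  # 48.0 K
```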
Observations of the ISM

Despite its extremely low density, photons generated in the ISM are prominent in nearly all bands of the electromagnetic spectrum. In fact, the optical band, on which astronomers relied until well into the 20th century, is the one in which the ISM is least obvious.

Radiowave propagation

Radio waves are affected by the plasma properties of the ISM. The lowest-frequency radio waves, below ≈ 0.1 MHz, cannot propagate through the ISM, since they are below its plasma frequency. At higher frequencies, the plasma has a significant refractive index, decreasing with increasing frequency and also dependent on the density of free electrons. Random variations in the electron density cause interstellar scintillation, which broadens the apparent size of distant radio sources seen through the ISM, with the broadening decreasing with frequency squared. The variation of refractive index with frequency causes the arrival times of pulses from pulsars and fast radio bursts to be delayed at lower frequencies (dispersion). The amount of delay is proportional to the column density of free electrons (the dispersion measure, DM), which is useful for both mapping the distribution of ionized gas in the Galaxy and estimating distances to pulsars (more distant ones have larger DM). A second propagation effect is Faraday rotation, which affects linearly polarized radio waves, such as those produced by synchrotron radiation, one of the most common sources of radio emission in astrophysics. Faraday rotation depends on both the electron density and the magnetic field strength, and so it is used as a probe of the interstellar magnetic field. The ISM is generally very transparent to radio waves, allowing unimpeded observations right through the disk of the Galaxy. There are a few exceptions to this rule. The most intense spectral lines in the radio spectrum can become opaque, so that only the surface of the line-emitting cloud is visible. This mainly affects the carbon monoxide lines at millimetre wavelengths that are used to trace molecular clouds, but the 21-cm line from neutral hydrogen can become opaque in the cold neutral medium. Such absorption only affects photons at the line frequencies: the clouds are otherwise transparent. The other significant absorption process occurs in dense ionized regions. These emit photons, including radio waves, via thermal bremsstrahlung. At short wavelengths, typically microwaves, these regions are quite transparent, but their brightness approaches the black-body limit as $\propto \lambda^{2.1}$, and at wavelengths long enough that this limit is reached, they become opaque. Thus metre-wavelength observations show H II regions as cool spots blocking the bright background emission from Galactic synchrotron radiation, while at decametre wavelengths the entire galactic plane is absorbed, and the longest radio waves observed, around 1 km in wavelength, can propagate only 10–50 parsecs through the Local Bubble. The frequency at which a particular nebula becomes optically thick depends on its emission measure

$EM = \int n_e^2 \, dl,$

the line-of-sight integral of the squared electron number density. Exceptionally dense nebulae can become optically thick at centimetre wavelengths: these are just-formed, and so both rare and small ('ultra-compact H II regions'). The general transparency of the ISM to radio waves, especially microwaves, may seem surprising, since radio waves at frequencies > 10 GHz are significantly attenuated by Earth's atmosphere. But the column density through the atmosphere is vastly larger than the column through the entire Galaxy, due to the extremely low density of the ISM.
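Two of the plasma effects described above can be put into numbers. A minimal sketch in Python; the electron density of 0.03 cm^-3 (a commonly assumed warm-ionized-medium value), the dispersion measure of 100 pc cm^-3, and the 1.4 GHz observing frequency are illustrative inputs, not figures from the text:

```python
import math

E_CHARGE = 1.602176634e-19   # electron charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg

def plasma_frequency_hz(n_e_cm3):
    """Frequency below which radio waves cannot propagate in the plasma."""
    n_e = n_e_cm3 * 1e6  # convert cm^-3 to m^-3
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E)) / (2 * math.pi)

print(f"f_p ~ {plasma_frequency_hz(0.03):.0f} Hz")  # ~1.6 kHz for diffuse gas

def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Pulse delay relative to infinite frequency (standard pulsar constant)."""
    return 4.149 * dm_pc_cm3 / freq_ghz**2

print(f"delay ~ {dispersion_delay_ms(100, 1.4):.0f} ms")  # ~212 ms at 1.4 GHz
```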
History of knowledge of interstellar space

The word 'interstellar' (between the stars) was coined by Francis Bacon in the context of the ancient theory of a literal sphere of fixed stars. Later in the 17th century, when the idea that stars were scattered through infinite space became popular, it was debated whether that space was a true vacuum or filled with a hypothetical fluid, sometimes called aether, as in René Descartes' vortex theory of planetary motions. While vortex theory did not survive the success of Newtonian physics, an invisible luminiferous aether was re-introduced in the early 19th century as the medium to carry light waves; e.g., in 1862 a journalist wrote: "this efflux occasions a thrill, or vibratory motion, in the ether which fills the interstellar spaces." In 1864, William Huggins used spectroscopy to determine that a nebula is made of gas. Huggins had a private observatory with an 8-inch telescope with a lens by Alvan Clark; it was equipped for spectroscopy, which enabled his breakthrough observations. From around 1889, Edward Barnard pioneered deep photography of the sky, finding many 'holes in the Milky Way'. At first he compared them to sunspots, but by 1899 he was prepared to write: "One can scarcely conceive a vacancy with holes in it, unless there is nebulous matter covering these apparently vacant places in which holes might occur". These holes are now known as dark nebulae, dusty molecular clouds silhouetted against the background star field of the galaxy; the most prominent are listed in his Barnard Catalogue. The first direct detection of cold diffuse matter in interstellar space came in 1904, when Johannes Hartmann observed the binary star Mintaka (Delta Orionis) with the Potsdam Great Refractor. Hartmann reported that absorption from the "K" line of calcium appeared "extraordinarily weak, but almost perfectly sharp", and also reported the "quite surprising result that the calcium line at 393.4 nanometres does not share in the periodic displacements of the lines caused by the orbital motion of the spectroscopic binary star". The stationary nature of the line led Hartmann to conclude that the gas responsible for the absorption was not present in the atmosphere of the star, but was instead located within an isolated cloud of matter residing somewhere along the line of sight to this star. This discovery launched the study of the interstellar medium. Interstellar gas was further confirmed by Slipher in 1909; by 1912, Slipher had confirmed interstellar dust as well. Interstellar sodium was detected by Mary Lea Heger in 1919 through the observation of stationary absorption from the atom's "D" lines at 589.0 and 589.6 nanometres towards Delta Orionis and Beta Scorpii. In a series of investigations, Viktor Ambartsumian introduced the now commonly accepted notion that interstellar matter occurs in the form of clouds. Subsequent observations of the "H" and "K" lines of calcium by Beals (1936) revealed double and asymmetric profiles in the spectra of Epsilon and Zeta Orionis. These were the first steps in the study of the very complex interstellar sightline towards Orion. Asymmetric absorption line profiles are the result of the superposition of multiple absorption lines, each corresponding to the same atomic transition (for example the "K" line of calcium), but occurring in interstellar clouds with different radial velocities. Because each cloud has a different velocity (either towards or away from the observer/Earth), the absorption lines occurring within each cloud are either blue-shifted or red-shifted (respectively) from the lines' rest wavelength through the Doppler effect.
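The cloud velocities are read off from those line shifts through the non-relativistic Doppler relation Δλ/λ = v/c. A minimal sketch in Python, using the calcium "K" line quoted above; the 20 km/s cloud velocity is an illustrative value, not one from the text:

```python
C_KM_S = 299_792.458   # speed of light, km/s
CA_K_NM = 393.4        # rest wavelength of the calcium "K" line, nm

def line_shift_nm(v_km_s, rest_nm=CA_K_NM):
    """Wavelength shift for a cloud at radial velocity v (positive = receding)."""
    return rest_nm * v_km_s / C_KM_S

print(f"{line_shift_nm(20.0):.4f} nm")  # ~0.0262 nm redshift at 20 km/s
```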
These observations, confirming that matter is not distributed homogeneously, were the first evidence of multiple discrete clouds within the ISM. The growing evidence for interstellar material led Pickering (1912) to comment: "While the interstellar absorbing medium may be simply the ether, yet the character of its selective absorption, as indicated by Kapteyn, is characteristic of a gas, and free gaseous molecules are certainly there, since they are probably constantly being expelled by the Sun and stars." The same year, Victor Hess's discovery of cosmic rays, highly energetic charged particles that rain onto the Earth from space, led others to speculate whether they also pervaded interstellar space. The following year, the Norwegian explorer and physicist Kristian Birkeland wrote: "It seems to be a natural consequence of our points of view to assume that the whole of space is filled with electrons and flying electric ions of all kinds. We have assumed that each stellar system in evolutions throws off electric corpuscles into space. It does not seem unreasonable therefore to think that the greater part of the material masses in the universe is found, not in the solar systems or nebulae, but in 'empty' space" (Birkeland 1913). Thorndike (1930) noted that "it could scarcely have been believed that the enormous gaps between the stars are completely void. Terrestrial aurorae are not improbably excited by charged particles emitted by the Sun. If the millions of other stars are also ejecting ions, as is undoubtedly true, no absolute vacuum can exist within the galaxy." In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, into more complex organics, "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature, which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." In February 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. In April 2019, scientists working with the Hubble Space Telescope reported the confirmed detection of the large and complex ionized molecules of buckminsterfullerene (C60, also known as "buckyballs") in the interstellar medium between the stars. In September 2020, evidence was presented of solid-state water in the interstellar medium, and in particular of water ice mixed with silicate grains in cosmic dust.
======================================== |