[SOURCE: https://en.wikipedia.org/wiki/Unbirthday] | [TOKENS: 430] |
Unbirthday

An unbirthday (originally written un-birthday) is an event celebrated on all days of the year which are not a person's birthday. It is a neologism which first appeared in Lewis Carroll's 1871 novel Through the Looking-Glass. The concept gave rise to "The Unbirthday Song" in the 1951 animated feature film Alice in Wonderland.

In Through the Looking-Glass, Humpty Dumpty is wearing a cravat (which Alice at first mistakes for a belt) which he says was given to him as an "un-birthday present" by the White King and Queen. He then has Alice calculate the number of unbirthdays in a year.

In the Disney animated film Alice in Wonderland, Alice stumbles upon the Mad Hatter, the March Hare and the Dormouse having an unbirthday party and singing "The Unbirthday Song" (music and lyrics by Mack David, Al Hoffman and Jerry Livingston). Alice at first does not understand what an unbirthday is; when the Mad Hatter explains it to her, she realises it is her unbirthday as well, and receives an unbirthday cake from the Mad Hatter. The scene from the film combines the idea of an unbirthday introduced in Through the Looking-Glass with the "Mad Tea Party" described in Alice's Adventures in Wonderland. Later in the film, the Mad Hatter mentions this unbirthday party when he is summoned as a witness at Alice's trial. The King of Hearts realises that it is the Queen of Hearts' unbirthday as well, and the trial is abruptly halted for celebration.

The unbirthday party is also the subject of a 1951 comic released to coincide with the film. The comic version is substantially longer (32 pages) than the scene in the animated film, and has Alice being invited to the unbirthday party of Tweedledum and Tweedledee (who are not actually present at the unbirthday party). Humpty Dumpty is a character in the comic version, although not in the manner in which he appears in Through the Looking-Glass.
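Alice's calculation in the Looking-Glass scene above is a simple subtraction, worked out here for clarity (the novel has Humpty Dumpty state the result):

```latex
365 \;\text{days} \;-\; 1 \;\text{birthday} \;=\; 364 \;\text{unbirthdays}
```

As Humpty Dumpty puts it, that leaves three hundred and sixty-four days on which one might get un-birthday presents.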
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Planetary_habitability_in_the_Solar_System] | [TOKENS: 4856] |
Planetary habitability in the Solar System

Planetary habitability in the Solar System is the study of the possible existence of past or present extraterrestrial life on the celestial bodies of the Solar System. As exoplanets are too far away and can only be studied by indirect means, the celestial bodies of the Solar System allow for much more detailed study: direct telescope observation, space probes, rovers and even human spaceflight. Aside from Earth, no planets in the Solar System are known to harbor life. Mars, Europa, and Titan are considered to have once had, or to currently have, conditions permitting the existence of life. Multiple rovers have been sent to Mars, while Europa Clipper is planned to reach Europa in 2030, and the Dragonfly space probe is planned to launch in 2027.

Outer space

The vacuum of outer space is a harsh environment. Besides the vacuum itself, temperatures are extremely low and there is a high amount of radiation from the Sun. Multicellular life cannot endure such conditions. Bacteria cannot thrive in the vacuum either, but may be able to survive under special circumstances. An experiment by microbiologist Akihiko Yamagishi held at the International Space Station exposed a group of bacteria to the vacuum, completely unprotected, for three years. The bacterium Deinococcus radiodurans survived the exposure; it had previously survived radiation, vacuum, and low temperatures in lab-controlled experiments. The outer cells of the group had died, but their remains shielded the cells on the inside, which were able to survive. Those studies give credence to the theory of panspermia, which proposes that life may be moved across planets within meteorites. Yamagishi even proposed the term massapanspermia for cells moving across space in clumps instead of within rocks. However, astrobiologist Natalie Grefenstette notes that unprotected cell clumps would have no shielding during the ejection from one planet and the re-entry into another.

Mercury

According to NASA, Mercury is not a suitable planet for Earth-like life. It has a surface-bounded exosphere instead of a layered atmosphere, extreme temperatures that range from 800 °F (430 °C) during the day to −290 °F (−180 °C) during the night, and high solar radiation. It is unlikely that any living beings can withstand those conditions, and it is also unlikely that remains of ancient life will ever be found: if any type of life ever appeared on the planet, it would have suffered an extinction event in a very short time. It is also suspected that most of the planetary surface was stripped away by a large impact, which would have also removed any life on the planet. The spacecraft MESSENGER found evidence of water ice on Mercury, within permanently shadowed craters not reached by sunlight. As a result of the thin atmosphere, temperatures within them stay cold and there is very little sublimation. There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have hosted sub-surface volatiles. The geology of Mercury had been considered to be shaped by impact craters and by quakes caused by a large impact at the Caloris basin, but the studies suggest that the timescales required are not consistent with that explanation; instead, sub-surface volatiles may have been heated and sublimated, causing the surface to fall apart. Those volatiles may have condensed at craters elsewhere on the planet, or been lost to space via the solar wind.
It is not known which volatiles may have been part of this process.

Venus

The surface of Venus is completely inhospitable for life. As a result of a runaway greenhouse effect, Venus has a temperature of 900 degrees Fahrenheit (475 degrees Celsius), hot enough to melt lead. It is the hottest planet in the Solar System, hotter even than Mercury despite being farther from the Sun. Likewise, the atmosphere of Venus is almost completely carbon dioxide, and the atmospheric pressure is 90 times that of Earth. There is no significant temperature change during the night, and the low axial tilt, only 3.39 degrees with respect to the Sun, makes temperatures quite uniform across the planet, without noticeable seasons.

Venus likely had liquid water on its surface for at least a few million years after its formation. Venus Express detected that Venus loses oxygen and hydrogen to space, with hydrogen escaping at twice the rate of oxygen. The source could be Venusian water, which ultraviolet radiation from the Sun splits into its constituent elements. There is also deuterium in the planet's atmosphere, a heavy isotope of hydrogen that is less capable of escaping the planet's gravity. However, the water may have existed only in the atmosphere, never forming oceans on the surface. Astrobiologist David Grinspoon considers that although there is no proof of Venus having had oceans, it likely had them, as a result of processes similar to those that took place on Earth. He considers that those oceans may have lasted for 600 million years, and were lost 4 billion years ago. The growing scarcity of liquid water altered the carbon cycle, reducing carbon sequestration. With most carbon dioxide staying in the atmosphere for good, the greenhouse effect worsened even more. Nevertheless, between the altitudes of 50 and 65 kilometers, the pressure and temperature are Earth-like, and the acidic upper layers of the Venusian atmosphere may accommodate thermoacidophilic extremophile microorganisms. According to this theory, life would have started in Venusian oceans when the planet was cooler, adapted to other environments as life did on Earth, and remained in the last habitable zone of the planet. The putative detection of an absorption line of phosphine in Venus's atmosphere, with no known pathway for abiotic production, led to speculation in September 2020 that there could be extant life in the atmosphere. Later research attributed the spectroscopic signal that had been interpreted as phosphine to sulfur dioxide, or found that in fact there was no absorption line.

Earth

Earth is the only celestial body known with certainty to have generated living beings, and thus the only current example of a habitable planet. At a distance of 1 AU from the Sun, it is within the circumstellar habitable zone of the Solar System, which means it can have oceans of water in a liquid state. Elements required by lifeforms, such as carbon, oxygen, nitrogen, hydrogen, and phosphorus, are also present in great quantity. The Sun provides energy for most ecosystems on Earth, harnessed by plants through photosynthesis. However, there are also ecosystems that form in places that never receive sunlight, such as deep areas of Earth's oceans. Organisms in these environments must utilize other forms of energy production; one such example is chemosynthesis, which takes place in isolated environments like Movile Cave. The atmosphere of Earth also plays an important role.
The ozone layer protects the planet from the harmful radiation of the Sun, and free oxygen is abundant enough for the breathing needs of terrestrial life. Earth's magnetosphere, generated by its active core, is also important for the long-term habitability of Earth, as it prevents the solar wind from stripping the atmosphere away from the planet. The atmosphere is thick enough to generate atmospheric pressure at sea level that keeps water in a liquid state, but not so high as to be harmful. Many other factors benefit the presence of life, but it is unclear whether life would have thrived without them. The planet is not tidally locked, and the atmosphere allows the distribution of heat, so temperatures are largely uniform and without great or swift changes. Bodies of water cover most of the world, but they still leave large landmasses and interact with rocks on their seabeds. A nearby celestial body, the Moon, subjects the Earth to substantial, but not catastrophic, tidal forces. Following a suggestion made by Carl Sagan, the Galileo probe studied Earth from a distance, using methods typically applied to study other planets. Atmospheric levels of oxygen and methane confirmed the presence of life on Earth. The spectral red edge was evidence of plants. The probe even detected a technosignature: strong radio waves that would not be present naturally.

Moon

Despite its proximity to Earth, the Moon is mostly inhospitable to life. No native lunar life has ever been found, nor have signs of life been located in samples of lunar rocks and soil. In 2019, the Israeli craft Beresheet crash-landed on the Moon while carrying tardigrades. While their chances of survival on the Moon were "extremely high," the force of the crash – not the Moon's environment – likely killed them. Many important conditions for life are not present on the Moon; its atmosphere is almost nonexistent, it holds no liquid water (though there is solid ice at some permanently shadowed craters), and it provides no protection from the radiation of the Sun. However, circumstances may have been different in the past. There are two possible time periods of lunar habitability: right after its formation, and during a period of high volcanic activity. In the first case, it is debated how many volatiles would survive in the debris disk, but some water could have been retained thanks to its difficulty diffusing through a silicate-dominated vapor. In the second case, owing to extreme outgassing from lunar magma, the Moon during that time could have had an atmosphere of 10 millibars. Although just 1% of the atmosphere of Earth, that number is higher than on Mars; those conditions may have been sufficient to allow liquid surface water, such as in the theorized lunar magma ocean. This theory is supported by studies of lunar rocks and soil, both of which were more hydrated than expected. Studies of lunar volcanism also reveal water within the Moon; some models indicate that the lunar mantle would have had a water content similar to Earth's upper mantle. These findings may be confirmed by studies of the Moon's crust that would suggest an ancient exposure to magmatic water. The early Moon may have also had its own magnetic field, deflecting the solar wind. Life on the Moon could have resulted from a local process of abiogenesis, though it could have also stemmed from panspermia from Earth.
Dirk Schulze-Makuch, professor of planetary science and astrobiology at the University of London, considers that those theories may be properly tested if a future expedition to the Moon seeks markers of life in lunar samples from the age of volcanic activity. Researchers could then test the survival of microorganisms in a simulated lunar environment imitating that specific time period.

Mars

Mars is the celestial body in the Solar System with the most similarities to Earth. A Martian sol lasts almost the same amount of time as an Earth day, and the planet's axial tilt gives it similar seasons. There is water on Mars, most of it frozen at the Martian polar ice caps, and some of it underground. However, there are many obstacles to its habitability. The surface temperature averages about −60 degrees Celsius (−80 degrees Fahrenheit). There are no permanent bodies of liquid water on the surface. The atmosphere is thin, and more than 96% of it is carbon dioxide, which is toxic; its atmospheric pressure is below 1% that of Earth. Combined with the lack of a magnetosphere, this leaves Mars open to harmful radiation from the Sun. Although no astronauts have set foot on Mars, the planet has been studied in great detail by rovers. So far, no native lifeforms have been found. The origin of the potential biosignature of methane observed in the atmosphere of Mars is unexplained, although hypotheses not involving life have been proposed.

These conditions may have been different in the past. Mars could have had bodies of water, a thicker atmosphere, and a working magnetosphere; Mars may have been habitable then. The rover Opportunity first discovered evidence pointing to a watery past, but later studies found that the terrain studied by the rover had been in contact with sulfuric acid, not water. The Gale crater, on the other hand, has clay minerals that could only have been formed in water with a neutral pH; for this reason, NASA selected it for the landing of the Curiosity rover. The crater Jezero is suspected of being the location of an ancient lake, which is why NASA sent the Perseverance rover to investigate. Although no living organisms have been found, rocks could contain fossil traces of ancient life if the lake had any. Microscopic life may have escaped the worsening conditions of the surface by moving underground. An experiment simulated those conditions to check the reactions of lichen and found that it survived by finding refuge in rock cracks and soil gaps. Although many geological studies suggest that Mars was habitable in the past, that does not necessarily mean that it was inhabited. Finding fossils of microscopic life from such distant times is an incredibly difficult task, even for Earth's earliest known life forms. Such fossils require a material capable of preserving cellular structures and surviving degradational rock-forming and environmental processes. The knowledge of taphonomy for those cases is limited to the sparse fossils found so far and is based on Earth's environment, which greatly differs from that of Mars.

Asteroid belt

Ceres, the only dwarf planet in the asteroid belt, has a thin water-vapor atmosphere. The vapor is likely the result of impacts of meteorites containing ice, but there is hardly any atmosphere besides that vapor. Nevertheless, the presence of water has led to speculation that life may be possible there. It has even been proposed that Ceres could be the source of life on Earth by panspermia, as its small size would allow fragments of it to escape its gravity more easily.
Although the dwarf planet might not have living things today, there could be signs that it harbored life in the past. The water in Ceres, however, is not liquid water on the surface: it arrives frozen in meteorites and sublimates to vapor. The dwarf planet is outside the habitable zone, is too small to have sustained tectonic activity, and does not orbit a tidally disruptive body as the moons of the gas giants do. However, studies by the Dawn space probe confirmed that Ceres has salt-enriched liquid water underground.

Jupiter

Carl Sagan and others in the 1960s and 1970s computed conditions for hypothetical microorganisms living in the atmosphere of Jupiter. The intense radiation and other conditions, however, do not appear to permit encapsulation and molecular biochemistry, so life there is thought unlikely. In addition, as a gas giant, Jupiter has no surface, so any potential microorganisms would have to be airborne. Although there are some layers of the atmosphere that may be habitable, the Jovian climate is in constant turbulence, and any microorganisms present would eventually be sucked into the deeper parts of Jupiter. In those areas, atmospheric pressure is 1,000 times that of Earth, and temperatures can reach 10,000 degrees. However, the Great Red Spot contains water clouds. Astrophysicist Máté Ádámkovics said that "where there's the potential for liquid water, the possibility of life cannot be completely ruled out. So, though it appears very unlikely, life on Jupiter is not beyond the range of our imaginations."

Callisto has a thin atmosphere and a subsurface ocean, and may be a candidate for hosting life. It orbits farther from the planet than the other large moons, so the tidal forces acting on it are weaker, but it also receives less harmful radiation.

Beneath its icy surface, Europa may have a liquid ocean, which could be a habitable environment. This potential ocean was first noticed by the two Voyager spacecraft and later backed by telescope studies from Earth. Current estimates suggest that this ocean may contain twice as much water as all of Earth's oceans combined, despite Europa's smaller size. The ice crust would be between 15 and 25 miles (24 and 40 km) thick and may represent an obstacle to studying this ocean, though it may be probed via possible eruption columns that reach outer space. Life would need liquid water, a number of chemical elements, and a source of energy; although Europa may have the first two, it is unconfirmed whether it has all three. A potential source of energy would be a hydrothermal vent, which has not been detected yet. Sunlight is not considered a viable energy source, as it is too weak in the Jupiter system and would also have to cross the thick ice surface. Other proposed energy sources, although still speculative, are the magnetosphere of Jupiter and kinetic energy. Unlike the oceans of Earth, the oceans of Europa would be under a permanent thick ice layer, which may make water aeration difficult. Richard Greenberg of the University of Arizona considers that the ice layer would not be a homogeneous block; instead, the ice would be in a cycle, renewing itself at the top and burying the surface ice deeper, eventually dropping it into the lower layer in contact with the ocean. This cycling would allow some air from the surface to eventually reach the ocean below.
Greenberg considers that the first surface oxygen to reach the oceans would have done so after a couple of billion years, allowing life to emerge and develop defenses against oxidation. He also considers that, once the process started, the amount of oxygen would allow the development of multicellular beings and perhaps even sustain a population comparable to all the fish on Earth. On 11 December 2013, NASA reported the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet, according to the scientists. The Europa Clipper, which will assess the habitability of Europa, launched in 2024 and is set to reach the moon in 2030. Europa's subsurface ocean is considered the best target for the discovery of life.

Ganymede, the largest moon in the Solar System, is the only moon known to have a magnetic field of its own. Its surface seems similar to those of Mercury and the Moon, and is likely just as hostile to life. It is suspected that Ganymede has an ocean below the surface, and that primitive life may be possible there; this suspicion arises from the unusually high level of water vapor in its thin atmosphere. The moon likely has several layers of ice and liquid water, and finally a liquid layer in contact with the mantle. The core, the likely cause of Ganymede's magnetic field, would have a temperature near 1600 K. This particular environment is suspected to be habitable. The moon is set to be the subject of investigation by the European Space Agency's Jupiter Icy Moons Explorer, which was launched in 2023 and will reach the Jovian system in 2031.

Of all the Galilean moons, Io is the closest to the planet. It is the moon with the highest volcanic activity in the Solar System, as a result of the tidal forces from the planet and its oval orbit around it. Even so, the surface is still cold: −143 °C. The atmosphere is 200 times thinner than Earth's, the proximity of Jupiter exposes the moon to intense radiation, and it is completely devoid of water. However, it may have had water in the past, and perhaps lifeforms underground.

Saturn

Similarly to Jupiter, Saturn is not likely to host life. It is a gas giant, and the temperatures, pressures, and materials found in it are too dangerous for life. The planet is mostly hydrogen and helium, with trace amounts of water ice. Temperatures near the surface are around −150 °C. The planet gets warmer with depth, but at depths where water could be liquid the atmospheric pressure is too high.

Enceladus, the sixth-largest moon of Saturn, has some of the conditions for life, including geothermal activity and water vapor, as well as possible under-ice oceans heated by tidal effects. The Cassini–Huygens probe detected carbon, hydrogen, nitrogen and oxygen—all key elements for supporting life—during its 2005 flyby through one of Enceladus's geysers spewing ice and gas. The temperature and density of the plumes indicate a warmer, watery source beneath the surface. Of the bodies on which life is considered possible, Enceladus is the one from which living organisms could most easily spread to the other bodies of the Solar System. Mimas, the seventh-largest moon of Saturn, is similar in size and orbital location to Enceladus.
In 2024, based on orbital data from the Cassini–Huygens mission, Mimas was calculated to contain a large tidally heated subsurface ocean starting ~20–30 km below its old, heavily cratered, and well-preserved surface, hinting at the potential for life.

Titan, the largest moon of Saturn, is the only known moon in the Solar System with a significant atmosphere. Data from the Cassini–Huygens mission refuted the hypothesis of a global hydrocarbon ocean, but later demonstrated the existence of liquid hydrocarbon lakes in the polar regions—the first stable bodies of surface liquid discovered outside Earth. Further data from Cassini have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell. Analysis of data from the mission has uncovered aspects of atmospheric chemistry near the surface that are consistent with—but do not prove—the hypothesis that organisms there, if present, could be consuming hydrogen, acetylene and ethane, and producing methane. NASA's Dragonfly mission, a VTOL-capable rotorcraft with a launch date set for 2027, is slated to land on Titan in the mid-2030s.

Uranus

The planet Uranus, an ice giant, is unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile. The only spacecraft to have visited and observed Uranus and its moons in detail is Voyager 2, in 1986. The five major moons of Uranus, however, may have been home to tidally heated subsurface oceans at some point in their histories, based on observations of Ariel's and Miranda's variegated geology combined with computer models of the four largest moons; Titania, the largest, is deemed the most likely.

Neptune

The planet Neptune, another ice giant explored by Voyager 2, is also unlikely to be habitable. The local temperatures and pressures may be too extreme, and the materials too volatile. The moon Triton, however, was shown to have cryovolcanism on its surface, as well as deposits of water ice and a relatively young and smooth geology, raising the possibility of a subsurface ocean.

Pluto

The dwarf planet Pluto is too cold to sustain life on the surface. Its average surface temperature is −232 °C, and surface water exists only as ice as hard as rock. The interior of Pluto, however, may be warmer and perhaps contain a subsurface ocean; geothermal activity is also a possibility. That, combined with Pluto's eccentric orbit, which sometimes brings it closer to the Sun, means there is a slight chance that the dwarf planet could contain life.

Kuiper belt

The dwarf planet Makemake is not habitable, due to its extremely low temperatures. The same goes for Haumea and Eris.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-305] | [TOKENS: 12858] |
Minecraft

Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities.

The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences.

Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time.

Gameplay

Minecraft is a 3D sandbox video game with no required goals to accomplish, giving players a large amount of freedom in choosing how to play; the game nevertheless features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of a third-person perspective. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. The blocks are arranged in a fixed voxel grid, while players can move freely around the world, as sketched in the example below. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity; the rest maintain their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or exchange items with villagers (NPCs) by trading emeralds for different goods and vice versa.
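The voxel-grid world described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not Mojang's actual code: integer (x, y, z) coordinates map to block types, and mining or placing a block is just an update of that mapping.

```python
# Toy sketch of a block world as a sparse voxel grid (illustrative only;
# not Mojang's implementation). Unset coordinates are treated as air.
world: dict[tuple[int, int, int], str] = {}

def place_block(pos: tuple[int, int, int], block: str) -> None:
    """Put a block of the given type at an integer grid position."""
    world[pos] = block

def mine_block(pos: tuple[int, int, int]) -> str:
    """Remove and return the block at pos; empty positions yield 'air'."""
    return world.pop(pos, "air")

place_block((0, 64, 0), "dirt")
place_block((0, 65, 0), "torch")   # stays in place: most blocks ignore gravity
print(mine_block((0, 64, 0)))      # -> 'dirt'; the torch keeps its grid cell
```

The sparse dictionary reflects the text's key point: blocks occupy fixed grid cells, while the player entity moves continuously and independently of the grid.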
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve and Alex, but are able to create and upload their own skins.

Players encounter various mobs (short for mobile entities), including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively.

The Minecraft environment is procedurally generated as players explore it, using a map seed that is randomly chosen at the time of world creation (or manually specified by the player); a sketch of this idea follows below. Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentional and not. The implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved; the current horizontal limit is instead a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky.

Minecraft features three independent dimensions, accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension, accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop the blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand.
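To make the seed mechanic above concrete, here is a minimal, hypothetical sketch of seed-driven generation. Real Minecraft uses layered gradient noise and biome logic rather than per-column random generators, but the core idea is the same: each location's randomness is derived purely from the seed and the coordinates, so revisiting the same coordinates always regenerates identical terrain.

```python
import random

def column_height(seed: int, x: int, z: int) -> int:
    """Toy terrain height for the column at (x, z), derived from the seed.

    Deterministic: the same (seed, x, z) always yields the same height,
    so a world can be effectively infinite while only the seed is stored.
    """
    rng = random.Random(hash((seed, x, z)))  # per-column deterministic RNG
    return 60 + rng.randint(0, 8)            # gentle hills around y = 60

seed = random.getrandbits(64)                # chosen at world creation
# Exploring the same column later regenerates identical terrain:
assert column_height(seed, 10, -3) == column_height(seed, 10, -3)
print(column_height(seed, 10, -3))
```

This is also why sharing a seed lets two players generate matching worlds: the seed, not the terrain itself, is the world's description.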
The End can be reached through an end portal, which consists of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The End dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem: a roughly 1,500-word work written by Irish novelist Julian Gough, which takes about nine minutes to scroll past and is the game's only narrative text, as well as the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely.

In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar, which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty; if the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar, or continuously on peaceful difficulty. Upon losing all health, players die, and the items in their inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawned in the game, and which can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they despawn five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.

The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update; it prevents the player from directly modifying the game's world and was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended.

In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
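Returning to the stronghold-finding mechanic described at the start of this section: because each thrown eye of ender travels in the direction of the stronghold, two throws made from different positions define two rays whose intersection estimates the stronghold's location. The sketch below is a hypothetical illustration of that geometry, a triangulation technique popular among players rather than code from the game itself.

```python
import math

def stronghold_estimate(p1, a1, p2, a2):
    """Estimate the stronghold's (x, z) from two eye-of-ender throws.

    p1, p2: (x, z) positions where the eyes were thrown.
    a1, a2: horizontal angles, in radians, that the eyes flew toward.
    Each throw defines a ray p + t*d; the estimate is their intersection.
    """
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of directions
    if abs(denom) < 1e-9:
        raise ValueError("throw directions are parallel; move and rethrow")
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two throws 500 blocks apart, angled toward each other:
print(stronghold_estimate((0, 0), math.radians(45),
                          (500, 0), math.radians(135)))  # ~ (250.0, 250.0)
```

In practice the farther apart the two throw positions are, the less a small error in the observed angles shifts the intersection point.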
Multiplayer in Minecraft enables multiple players to interact and communicate with each other in a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a Realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers host a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players.

In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time; Minecraft Realms (Bedrock Edition) server owners can invite up to 3,000 people, likewise with up to ten players online at one time. Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps; Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, cross-platform play between Windows 10, iOS, and Android platforms through Realms was announced for June 2016, with Xbox One and Nintendo Switch support, as well as support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018.

The modding community consists of fans, users and third-party programmers. Using a variety of application programming interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add elements from other video games and media to the game. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds.
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specifically for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation.

The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update, while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and was later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017.

In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and that, when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue and released a statement explaining that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened.

In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue.

Development

Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon.
Among the features of RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded the idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements.

The first public alpha build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release a full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten.

On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was completed on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires".

After 2014, Minecraft's primary versions received major updates—usually annual, and free to players who have purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates.

On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can only be enabled in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, with a texture pack from Nvidia's website, or with compatible third-party texture packs; it cannot be enabled by default with arbitrary texture packs on arbitrary worlds. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date.

Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009; this earliest phase ended on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010.

The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum, but he later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer.

On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. This move included Mojang taking apparent ownership of the CraftBukkit server mod, though the acquisition later became controversial and its legitimacy was questioned due to CraftBukkit's open-source nature and licensing under the GNU General Public License and Lesser General Public License.

In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements, and lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full crossplay with other Pocket versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition.

The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but it received updates that brought it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players.

Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions, released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, it was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios.

Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS or New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems.

On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. The Bedrock Edition received a native version for PlayStation 5 on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as the "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well.

An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, macOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018; it was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta of the Education Edition was made available to Google Play Store-compatible Chromebooks, and the full game was released on the Google Play Store for Chromebooks on 7 August 2020.

On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023.

The Windows 10 Edition, a version of Bedrock Edition, is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. The beta release for Windows 10 launched on the Windows Store on 29 July 2015. After nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other. Both game versions would otherwise remain separate.

Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after the character of the same name from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive.

Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. On learning Foley for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when about to explode; Rosenfeld later recalled "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld remarked that the sound engine was "terrible" to work with, remembering "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the software package Ableton Live, along with several additional plug-ins. Speaking of them, Rosenfeld said "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included the music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. 
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic. His music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine serving as the game's new primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, with its label publishing all of the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record had by then grown longer than the previous two albums combined, which together clock in at over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to see release. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, saying, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have expressed enjoyment of Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste". A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. 
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial, in-game tips, and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best-received ports, being praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, being praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively large worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised, spreading instead through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. 
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft boosted Microsoft's total first-party revenue by $63 million in the second quarter of 2015. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the 2011 Game Developers Choice Awards, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list. 
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2022. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, who cited emails he had received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang was claiming, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition to require a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that improved security could be offered, including two-factor authentication, blocking cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required that all players migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones. 
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts; initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this policy changed, and losing mobs would retain a chance of coming to the game in the future. The first Mob Vote was held during Minecon Earth 2017 and became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs (the crab, the penguin, and the armadillo), with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning the vote. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release and help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the early access model among indie developers. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game would go on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most searched term on the entire platform. 
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the platform had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character with a moveset including references to building, crafting, and redstone, alongside an Overworld-themed stage. It was also referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released. It made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods. 
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments in Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having formal training in architecture. The ideas presented by the citizens were a template for political decisions. In April 2014, the Danish Geodata Agency recreated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries in the world, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the build limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite its unpredictable nature, Minecraft speedrunning, where players time themselves from spawning into a new world to reaching The End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer. Mods have been created to use these mechanics for teaching programming. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources. 
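Programming lessons of this kind typically drive the game world from a short script, so that coordinates and loops produce changes students can see immediately. The sketch below is an illustration only, not Microsoft's or Code.org's actual curriculum: it uses the community mcpi Python library, which speaks the Raspberry Pi/RaspberryJuice protocol, and the server address, port, and staircase exercise are all assumptions.

# Illustrative classroom-style exercise using the community "mcpi" library.
# Assumes a Minecraft server with the RaspberryJuice plugin (or a Raspberry
# Pi edition instance) listening on localhost:4711; not an official lesson.
from mcpi.minecraft import Minecraft
from mcpi import block

mc = Minecraft.create("localhost", 4711)  # connect to the running game

pos = mc.player.getTilePos()  # the player's current block coordinates

# A classic first task: build a five-step stone staircase with a loop,
# teaching coordinates and iteration at the same time.
for step in range(1, 6):
    mc.setBlock(pos.x + step, pos.y + step, pos.z, block.STONE.id)

mc.postToChat("Staircase built - try walking up it!")

The official Education Edition tutorials wrap similar ideas in block-based editors such as MakeCode rather than raw Python, but the underlying lesson, mapping code to visible changes in the world, is the same.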
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft." Following the initial surge in popularity of Minecraft in 2010, other video games were criticised for their various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or a superficial similarity. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). In the end, fans' fears proved unfounded, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious", and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is completely AI-generated in real time and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging the game was infringing on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The DMCA claim was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob/biome votes and announcements of new game updates. 
In 2025, "Minecraft Live" became a twice-yearly event as part of Minecraft's changing update schedule. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Shashthipurti] | [TOKENS: 646] |
Contents Shashthipurti Shashthipurti (Sanskrit: षष्ठीपूर्ति, romanized: Ṣaṣṭhipūrti) or Shashtyabdapurti (Sanskrit: षष्ट्यब्दपूर्ति, romanized: Ṣaṣṭyabdapūrti) is a Hindu ceremony marking the completion of sixty years of age. It also marks the completion of half the years of one's lifetime in Hindu belief, as an age of one hundred and twenty years is considered the theoretical lifespan of a human being. Etymology Shashtyabdapurti is a compound derived from the Sanskrit words shashthi, meaning sixty, and abdapurti, meaning the completion of a cycle of years. Description The rituals that comprise this ceremony include the shanti and the kranti. The Ugraratha Shanti is a prayer sent to the heavens to make the post-sixty span a spiritually fulfilling experience. After the successful completion of the shanti, the kranti rituals, which signify the transition into a new life, most prominently include a ceremonial wedding and the reaffirmation of kalyana (marriage). Shashthipurti is regarded as signifying a bridge between the householder's domestic concerns and vanaprastha's (the third stage of life) spiritual yearnings. During vanaprastha, the married couple is to fulfill their life's mission by staying together through the observance of celibacy. The Kalyana Veduka is a reminder of the unique role they are to play in the years to come. Ceremony The Shashthipurti takes place over a period of two days. The ceremony is commenced during an auspicious period by performing the "Yamuna Puja", followed by the "Ganga Puja", "Ishta Devata Vandana", "Sabha Vandana", "Punyaha with Panchagavya Sevana", "Nandi Puja", "Ritvikvarana", and "Kalasha Sthapana". Kalasha sthapana is performed for the deities: "Maha Ganapati", "Adityadi Navagraha", "Mrityunjaya", "Samvatsara-Ayana-Ritu-Masa-Paksha-Yoga Devata", "Karana Devata", "Rashyadhipati (husband and wife)", "Navadurga", "Saptama Maru Devata", "Dvadasha Aditya – Dhata, Aryama, Mitra, Varuna, Indra, Vivasvan, Tvashta, Vishnu, Anshuman, Bhaga, Pusha and Parjanya", and "Ayurdevata, Ishtadevata, Kuladevata". These are followed by the Avahana-Prana Pratishthapanam, Shodashopachara Puja, Mahamangalarati, and the Navagraha and Ganapati Homa. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/OpenAI#cite_note-110] | [TOKENS: 8773] |
Contents OpenAI OpenAI is an American artificial intelligence research organization comprising both a non-profit foundation and a controlled for-profit public benefit corporation (PBC), headquartered in San Francisco. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". OpenAI is widely recognized for its development of the GPT family of large language models, the DALL-E series of text-to-image models, and the Sora series of text-to-video models, which have influenced industry research and commercial applications. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization was founded in 2015 in Delaware but has since evolved a complex corporate structure. As of October 2025, following restructuring approved by California and Delaware regulators, the non-profit OpenAI Foundation holds 26% of the for-profit OpenAI Group PBC, with Microsoft holding 27% and employees and other investors holding 47%. Under its governance arrangements, the OpenAI Foundation holds the authority to appoint the board of the for-profit OpenAI Group PBC, a mechanism designed to align the entity's strategic direction with the Foundation's charter. Microsoft has invested over $13 billion in OpenAI and provides Azure cloud computing resources. In October 2025, OpenAI conducted a $6.6 billion share sale that valued the company at $500 billion. In 2023 and 2024, OpenAI faced multiple lawsuits alleging copyright infringement, brought by authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem. Founding In December 2015, OpenAI was founded as a not-for-profit organization by Sam Altman, Elon Musk, Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk as the co-chairs. A total of $1 billion in capital was pledged by Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), and Infosys. However, the capital actually collected significantly lagged the pledges; according to company disclosures, only $130 million had been received by 2019. In its founding charter, OpenAI stated an intention to collaborate openly with other institutions by making certain patents and research publicly available, but it later restricted access to its most capable models, citing competitive and safety concerns. OpenAI was initially run from Brockman's living room. It was later headquartered at the Pioneer Building in the Mission District, San Francisco. According to OpenAI's charter, its founding mission is "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity." Musk and Altman stated in 2015 that they were partly motivated by concerns about AI safety and existential risk from artificial general intelligence. 
OpenAI stated that "it's hard to fathom how much human-level AI could benefit society", and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". The startup also wrote that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible", and that "because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest." Co-chair Sam Altman expected a decades-long project that would eventually surpass human intelligence. Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of leading AI researchers; Brockman was able to hire nine of them as the first employees in December 2015. OpenAI did not pay AI researchers salaries comparable to those of Facebook or Google, nor did it offer the stock options that AI researchers typically receive. Nevertheless, OpenAI spent $7 million on its first 52 employees in 2016. OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission." OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead. In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research. Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours. In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications. Corporate structure In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit capped at 100 times any investment. According to OpenAI, the capped-profit model allows OpenAI Global, LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company; many top researchers worked for Google Brain, DeepMind, or Facebook, which offered equity that a nonprofit would have been unable to match. Before the transition, OpenAI was legally required to publicly disclose the compensation of its top employees. The company then distributed equity to its employees and partnered with Microsoft, which announced an investment package of $1 billion in the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. OpenAI Global, LLC then announced its intention to commercially license its technologies. It planned to spend the $1 billion "within five years, and possibly much faster". Altman stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence. The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global, LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global, LLC. 
In addition, minority members with a stake in OpenAI Global, LLC are barred from certain votes due to conflict of interest. Some researchers have argued that OpenAI Global, LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI. On February 29, 2024, Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk revived his legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. On February 10, 2025, a consortium of investors led by Elon Musk submitted a $97.4 billion unsolicited bid to buy the nonprofit that controls OpenAI, declaring willingness to match or exceed any better offer. The offer was rejected on February 14, 2025, with OpenAI stating that the company was not for sale, but the bid complicated Altman's restructuring plan by setting a benchmark for how highly the nonprofit's stake should be valued. OpenAI, Inc. was originally designed as a nonprofit in order to ensure that AGI "benefits all of humanity" rather than "the private gain of any person". In 2019, it created OpenAI Global, LLC, a capped-profit subsidiary controlled by the nonprofit. In December 2024, OpenAI proposed a restructuring plan to convert the capped-profit subsidiary into a Delaware-based public benefit corporation (PBC) and to release it from the control of the nonprofit. The nonprofit would sell its control and other assets, getting equity in return, which it would use to fund and pursue separate charitable projects, including in science and education. OpenAI's leadership described the change as necessary to secure additional investments, and claimed that the nonprofit's founding mission to ensure AGI "benefits all of humanity" would be better fulfilled. The plan was criticized by former employees. An open letter named "Not For Private Gain" asked the attorneys general of California and Delaware to intervene, stating that the restructuring was illegal and would remove governance safeguards from the nonprofit and the attorneys general. The letter argues that OpenAI's complex structure was deliberately designed to remain accountable to its mission, without the conflicting pressure of maximizing profits. It contends that the nonprofit is best positioned to advance its mission of ensuring AGI benefits all of humanity by continuing to control OpenAI Global, LLC, regardless of the amount of equity it could get in exchange. PBCs can choose how they balance their mission with profit-making, and controlling shareholders have a large influence on how closely a PBC sticks to its mission. On October 28, 2025, OpenAI announced that it had adopted the new PBC corporate structure after receiving approval from the attorneys general of California and Delaware. Under the new structure, OpenAI's for-profit branch became a public benefit corporation known as OpenAI Group PBC, while the non-profit was renamed the OpenAI Foundation. 
The OpenAI Foundation holds a 26% stake in the PBC, while Microsoft holds a 27% stake and the remaining 47% is owned by employees and other investors. All members of the OpenAI Group PBC board of directors are appointed by the OpenAI Foundation, which can remove them at any time. Members of the Foundation's board will also serve on the for-profit board. The new structure allows the for-profit PBC to raise investor funds like most traditional tech companies, including through an initial public offering, which Altman claimed was the most likely path forward. In January 2023, OpenAI Global, LLC was in talks for funding that would value the company at $29 billion, double its 2021 value. On January 23, 2023, Microsoft announced a new US$10 billion investment in OpenAI Global, LLC over multiple years, provided partly in the form of access to Microsoft's cloud-computing service Azure. From September to December 2023, Microsoft rebranded all variants of its Copilot to Microsoft Copilot, added it to many installations of Windows, and released Microsoft Copilot mobile apps. Following OpenAI's 2025 restructuring, Microsoft owns a 27% stake in the for-profit OpenAI Group PBC, valued at $135 billion. In a deal announced the same day, OpenAI agreed to purchase $250 billion of Azure services, with Microsoft ceding its right of first refusal over OpenAI's future cloud computing purchases. As part of the deal, OpenAI will continue to share 20% of its revenue with Microsoft until it achieves AGI, which must now be verified by an independent panel of experts. The deal also loosened restrictions on both companies working with third parties, allowing Microsoft to pursue AGI independently and allowing OpenAI to develop products with other companies. In 2017, OpenAI spent $7.9 million, a quarter of its functional expenses, on cloud computing alone. In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks. In October 2024, OpenAI completed a $6.6 billion capital raise at a $157 billion valuation, including investments from Microsoft, Nvidia, and SoftBank. On January 21, 2025, Donald Trump announced The Stargate Project, a joint venture between OpenAI, Oracle, SoftBank and MGX to build an AI infrastructure system in conjunction with the US government. The project takes its name from OpenAI's existing "Stargate" supercomputer project and is estimated to cost $500 billion, with the partners planning to fund the project over the following four years. In July, the United States Department of Defense announced that OpenAI had received a $200 million contract for AI in the military, along with Anthropic, Google, and xAI. In the same month, the company made a deal with the UK Government to use ChatGPT and other AI tools in public services. OpenAI subsequently launched a $50 million fund to support nonprofit and community organizations. In April 2025, OpenAI raised $40 billion at a $300 billion post-money valuation, the highest-value private technology deal in history. The financing round was led by SoftBank, with other participants including Microsoft, Coatue, Altimeter and Thrive. In July 2025, the company reported annualized revenue of $12 billion. 
This was an increase from $3.7 billion in 2024, driven by ChatGPT subscriptions, which reached 20 million paid subscribers by April 2025 (up from 15.5 million at the end of 2024), alongside a rapidly expanding enterprise customer base that grew to five million business users. The company's cash burn remains high because of the intensive computational costs required to train and operate large language models, and it projects an $8 billion operating loss in 2025. OpenAI reports revised long-term spending projections totaling approximately $115 billion through 2029, with annual expenditures projected to escalate significantly, reaching $17 billion in 2026, $35 billion in 2027, and $45 billion in 2028. These expenditures are primarily allocated toward expanding compute infrastructure, developing proprietary AI chips, constructing data centers, and funding intensive model training programs, with more than half of the spending through the end of the decade expected to support research-intensive compute for model training and development. The company's financial strategy prioritizes market expansion and technological advancement over near-term profitability, with OpenAI targeting cash-flow-positive operations by 2029 and projecting revenue of approximately $200 billion by 2030. This spending trajectory underscores both the enormous capital requirements of scaling cutting-edge AI technology and OpenAI's intent to maintain its position as a leader in the artificial intelligence industry. In October 2025, OpenAI completed an employee share sale of up to $10 billion to existing investors which valued the company at $500 billion, making OpenAI the most valuable privately held company in the world, surpassing SpaceX. On November 17, 2023, Sam Altman was removed as CEO when OpenAI's board of directors (composed of Helen Toner, Ilya Sutskever, Adam D'Angelo and Tasha McCauley) cited a lack of confidence in him. Chief Technology Officer Mira Murati took over as interim CEO. Greg Brockman, the president of OpenAI, was also removed as chairman of the board and resigned from the company's presidency shortly thereafter. Three senior OpenAI researchers subsequently resigned: director of research and GPT-4 lead Jakub Pachocki, head of AI risk Aleksander Mądry, and researcher Szymon Sidor. On November 18, 2023, there were reportedly talks of Altman returning as CEO amid pressure placed upon the board by investors such as Microsoft and Thrive Capital, who objected to Altman's departure. Although Altman himself spoke in favor of returning to OpenAI, he has since stated that he considered starting a new company and bringing former OpenAI employees with him if talks to reinstate him didn't work out. The board members agreed "in principle" to resign if Altman returned. On November 19, 2023, negotiations with Altman to return failed and Murati was replaced by Emmett Shear as interim CEO. The board initially contacted Anthropic CEO Dario Amodei (a former OpenAI executive) about replacing Altman, and proposed a merger of the two companies, but both offers were declined. On November 20, 2023, Microsoft CEO Satya Nadella announced Altman and Brockman would be joining Microsoft to lead a new advanced AI research team, but added that they were still committed to OpenAI despite recent events. Before the partnership with Microsoft was finalized, Altman gave the board another opportunity to negotiate with him. 
About 738 of OpenAI's 770 employees, including Murati and Sutskever, signed an open letter stating they would quit their jobs and join Microsoft if the board did not rehire Altman and then resign itself. This prompted OpenAI investors to consider legal action against the board as well. In response, OpenAI management sent an internal memo to employees stating that negotiations with Altman and the board had resumed and would take some time. On November 21, 2023, after continued negotiations, Altman and Brockman returned to the company in their prior roles along with a reconstructed board made up of new members Bret Taylor (as chairman) and Lawrence Summers, with D'Angelo remaining. According to subsequent reporting, shortly before Altman's firing, some employees had raised concerns to the board about how he had handled the safety implications of a recent internal AI capability discovery. On November 29, 2023, OpenAI announced that an anonymous Microsoft employee had joined the board as a non-voting member to observe the company's operations; Microsoft gave up this observer seat in July 2024. In February 2024, the Securities and Exchange Commission subpoenaed OpenAI's internal communications to determine if Altman's alleged lack of candor had misled investors. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools. In June 2024, OpenAI acquired Multi, a startup focused on remote collaboration. In March 2025, OpenAI reached a deal with CoreWeave to acquire $350 million worth of CoreWeave shares and access to its AI infrastructure in return for $11.9 billion paid over five years. Microsoft was already CoreWeave's biggest customer in 2024. Alongside their other business dealings, OpenAI and Microsoft were renegotiating the terms of their partnership to facilitate a potential future initial public offering by OpenAI, while ensuring Microsoft's continued access to advanced AI models. On May 21, 2025, OpenAI announced the $6.5 billion acquisition of io, an AI hardware start-up founded by former Apple designer Jony Ive in 2024. In September 2025, OpenAI agreed to acquire the product testing startup Statsig for $1.1 billion in an all-stock deal and appointed Statsig's founding CEO Vijaye Raji as OpenAI's chief technology officer of applications. The company also announced development of an AI-driven hiring service designed to rival LinkedIn. OpenAI acquired the personal finance app Roi in October 2025. In the same month, OpenAI acquired Software Applications Incorporated, the developer of Sky, a macOS-based natural language interface designed to operate across desktop applications; the Sky team joined OpenAI, and the company announced plans to integrate Sky's capabilities into ChatGPT. In December 2025, it was announced that OpenAI had agreed to acquire Neptune, an AI tooling startup that helps companies track and manage model training, for an undisclosed amount. In January 2026, it was announced that OpenAI had acquired the healthcare technology startup Torch for approximately $60 million. The acquisition followed the launch of OpenAI's ChatGPT Health product and was intended to strengthen the company's medical data and healthcare artificial intelligence capabilities. 
OpenAI has been criticized for outsourcing the annotation of data sets to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to moderate toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. A Time investigation found that OpenAI began sending snippets of data to Sama as early as November 2021. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, and Sama was redistributing the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among which were infrastructure expenses, quality assurance and management. In 2024, OpenAI began collaborating with Broadcom to design a custom AI chip capable of both training and inference, targeted for mass production in 2026 and to be manufactured by TSMC on a 3 nm process node. The initiative was intended to reduce OpenAI's dependence on Nvidia GPUs, which are costly and face high demand in the market. In January 2024, Arizona State University purchased ChatGPT Enterprise in OpenAI's first deal with a university. In June 2024, Apple Inc. signed a contract with OpenAI to integrate ChatGPT features into its products as part of its new Apple Intelligence initiative. In June 2025, OpenAI began renting Google Cloud's Tensor Processing Units (TPUs) to support ChatGPT and related services, marking its first meaningful use of non-Nvidia AI chips. In September 2025, it was revealed that OpenAI had signed a contract with Oracle to purchase $300 billion in computing power over the next five years. In September 2025, OpenAI and NVIDIA announced a memorandum of understanding that included a potential deployment of at least 10 gigawatts of NVIDIA systems and a $100 billion investment from NVIDIA in OpenAI. OpenAI expected the negotiations to be completed within weeks; as of January 2026, however, the deal had not been finalized, and the two sides were rethinking the future of their partnership. In October 2025, OpenAI announced a multi-billion dollar deal with AMD. OpenAI committed to purchasing six gigawatts worth of AMD chips, starting with the MI450, and will have the option to buy up to 160 million shares of AMD, about 10% of the company, depending on development, performance and share price targets. In December 2025, Disney said it would make a $1 billion investment in OpenAI and signed a three-year licensing deal that will let users generate videos using Sora, OpenAI's short-form AI video platform. More than 200 Disney, Marvel, Star Wars and Pixar characters will be available to OpenAI users. In early 2026, Amazon entered advanced discussions to invest up to $50 billion in OpenAI as part of a potential artificial intelligence partnership. Under the proposed agreement, OpenAI's models could be integrated into Amazon's digital assistant Alexa and other internal projects. OpenAI provides LLMs to the Artificial Intelligence Cyber Challenge and to the Advanced Research Projects Agency for Health. In October 2024, The Intercept revealed that OpenAI's tools are considered "essential" for AFRICOM's mission and included in an "Exception to Fair Opportunity" contractual agreement between the United States Department of Defense and Microsoft. 
In December 2024, OpenAI said it would partner with defense-tech company Anduril to build drone defense technologies for the United States and its allies. In 2025, OpenAI's Chief Product Officer, Kevin Weil, was commissioned lieutenant colonel in the U.S. Army to join Detachment 201 as senior advisor. In June 2025, the U.S. Department of Defense awarded OpenAI a $200 million one-year contract to develop AI tools for military and national security applications. OpenAI announced a new program, OpenAI for Government, to give federal, state, and local governments access to its models, including ChatGPT. Services In February 2019, GPT-2 was announced, which gained attention for its ability to generate human-like text. In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at answering questions in natural language, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, simply named "the API", would form the heart of its first commercial product. Eleven employees left OpenAI, mostly between December 2020 and January 2021, in order to establish Anthropic. In 2021, OpenAI introduced DALL-E, a specialized deep learning model adept at generating complex digital images from textual descriptions, utilizing a variant of the GPT-3 architecture. In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days. According to anonymous sources cited by Reuters in December 2022, OpenAI Global, LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024. After ChatGPT was launched, Google announced a similar chatbot, Bard, amid internal concerns that ChatGPT could threaten Google's position as a primary source of online information. On February 7, 2023, Microsoft announced that it was building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products. On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus. On November 6, 2023, OpenAI launched GPTs, allowing individuals to create customized versions of ChatGPT for specific purposes, further expanding the possibilities of AI applications across various industries. On November 14, 2023, OpenAI announced it had temporarily suspended new sign-ups for ChatGPT Plus due to high demand. Access for newer subscribers re-opened a month later, on December 13. In December 2024, the company launched the Sora model. It also launched OpenAI o1, an early reasoning model that was internally codenamed "Strawberry". Additionally, ChatGPT Pro, a $200/month subscription service offering unlimited o1 access and enhanced voice features, was introduced, and preliminary benchmark results for the upcoming OpenAI o3 models were shared. On January 23, 2025, OpenAI released Operator, an AI agent and web automation tool for accessing websites to execute goals defined by users. The feature was only available to Pro users in the United States. Nine days later, OpenAI released its deep research agent, which scored 27% accuracy on the benchmark Humanity's Last Exam (HLE). Altman later stated that GPT-4.5 would be the last model without full chain-of-thought reasoning.
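The commercial API mentioned above is today typically reached through official client libraries rather than raw HTTP. As a purely illustrative sketch, not taken from this article, assuming the openai Python package (v1 or later), an OPENAI_API_KEY set in the environment, and a placeholder model name:

```python
# Minimal sketch of a chat completion call with the openai Python client.
# The model name below is a placeholder assumption; substitute any model
# available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize GPT-3 in one sentence."}],
)
print(response.choices[0].message.content)
```

The same basic pattern, a single authenticated request that returns generated text, is what the early "the API" product exposed for GPT-3, though the exact endpoints and model names have changed over the years.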
In July 2025, reports indicated that AI models by both OpenAI and Google DeepMind solved mathematics problems at the level of top-performing students in the International Mathematical Olympiad. OpenAI's large language model was able to achieve gold medal-level performance, reflecting significant progress in AI's reasoning abilities. On October 6, 2025, OpenAI unveiled its Agent Builder platform during the company's DevDay event. The platform includes a visual drag-and-drop interface that lets developers and businesses design, test, and deploy agentic workflows with limited coding. On October 21, 2025, OpenAI introduced ChatGPT Atlas, a browser integrating the ChatGPT assistant directly into web navigation, to compete with existing browsers such as Google Chrome and Apple Safari. On December 11, 2025, OpenAI announced GPT-5.2, a model that, according to the company, is better at creating spreadsheets, building presentations, perceiving images, writing code and understanding long context. On January 27, 2026, OpenAI introduced Prism, a LaTeX-native workspace meant to help scientists with research and writing. The platform uses GPT-5.2 as a backend to automate the drafting of scientific papers, including features for managing citations, formatting complex equations, and real-time collaborative editing. In March 2023, the company was criticized for disclosing particularly few technical details about products like GPT-4, contradicting its initial commitment to openness and making it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety concerns to justify this shift. OpenAI's former chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models was increasingly risky, and that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years. In September 2025, OpenAI published a study on how people use ChatGPT for everyday tasks. The study found that "non-work tasks" (according to an LLM-based classifier) account for more than 72 percent of all ChatGPT usage, with a minority of overall usage related to business productivity. In July 2023, OpenAI launched the superalignment project, aiming within four years to determine how to align future superintelligent systems. OpenAI promised to dedicate 20% of its computing resources to the project, although the team reportedly never received anything close to that share. OpenAI ended the project in May 2024 after its co-leaders Ilya Sutskever and Jan Leike left the company. In August 2025, OpenAI was criticized after thousands of private ChatGPT conversations were inadvertently exposed to public search engines like Google through an experimental "share with search engines" feature. The opt-in toggle, intended to allow users to make specific chats discoverable, resulted in some discussions, including personal details such as names, locations, and intimate topics, appearing in search results when users accidentally enabled it while sharing links. OpenAI announced the feature's permanent removal on August 1, 2025, and the company began coordinating with search providers to remove the exposed content, emphasizing that the exposure was not a security breach but a design flaw that heightened privacy risks. CEO Sam Altman acknowledged the issue in a podcast, noting that users often treat ChatGPT as a confidant for deeply personal matters, which amplified concerns about AI handling sensitive data.
Management In 2018, Musk resigned from his Board of Directors seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars. OpenAI stated that Musk's financial contributions were below $45 million. On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest with his investments in AI companies via Greylock Partners, and his co-founding of the AI startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI. In May 2024, Chief Scientist Ilya Sutskever resigned and was succeeded by Jakub Pachocki. Co-leader Jan Leike also departed amid concerns over safety and trust. OpenAI then signed deals with Reddit, News Corp, Axios, and Vox Media, and Paul Nakasone joined the board of OpenAI. In August 2024, cofounder John Schulman left OpenAI to join Anthropic, and OpenAI's president Greg Brockman took extended leave until November. In September 2024, CTO Mira Murati left the company. In November 2025, Lawrence Summers resigned from the board of directors. Governance and legal issues In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overly regulated. They also called for more technical safety research for superintelligences, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the FTC issued a civil investigative demand to OpenAI to investigate whether the company's data security and privacy practices to develop ChatGPT were unfair or harmed consumers (including by reputational harm) in violation of Section 5 of the Federal Trade Commission Act of 1914. These demands are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The investigation covered allegations that the company scraped public data and published false and defamatory information. The agency asked OpenAI for comprehensive information about its technology and privacy safeguards, as well as any steps taken to prevent the recurrence of situations in which its chatbot generated false and derogatory content about people. The agency also raised concerns about "circular" spending arrangements, for example Microsoft extending Azure credits to OpenAI while both companies shared engineering talent, and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a House of Lords committee. In February 2025, OpenAI CEO Sam Altman stated that the company was interested in collaborating with the People's Republic of China, despite regulatory restrictions imposed by the U.S. government. This shift came in response to the growing influence of the Chinese artificial intelligence company DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1.
Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from industrial espionage. Some industry observers noted similarities between DeepSeek's model distillation approach and OpenAI's methodology, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroached on ground better regulated by the federal government. Public Citizen opposed federal preemption of state AI laws and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered innovation. Before May 2024, OpenAI required departing employees to sign a lifelong non-disparagement agreement forbidding them from criticizing OpenAI or even acknowledging the existence of the agreement. Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision, and that OpenAI never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement. OpenAI was sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad in July 2023. In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The New York Times also sued the company in late December 2023. In May 2024, it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription of YouTube videos raised concerns among OpenAI employees regarding potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president, Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024, The Intercept as well as Raw Story and AlterNet Media filed lawsuits against OpenAI alleging copyright infringement. The lawsuits are said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included The Mercury News, The Denver Post, The Orange County Register, St. Paul Pioneer Press, Chicago Tribune, Orlando Sentinel, Sun Sentinel, and New York Daily News. In June 2023, a lawsuit claimed that OpenAI scraped 300 billion words online without consent and without registering as a data broker.
It was filed in San Francisco, California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and Microsoft, its partner and customer, continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with News Corp to integrate news content from The Wall Street Journal, the New York Post, The Times, and The Sunday Times into its AI platform. Meanwhile, other publications like The New York Times chose to sue OpenAI and Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, in a New York Times interview, Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs, which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of conspiracy theories alleging that he had been deliberately silenced. California Congressman Ro Khanna endorsed calls for an investigation. On April 24, 2025, Ziff Davis sued OpenAI in Delaware federal court for copyright infringement. Ziff Davis is known for publications such as ZDNet, PCMag, CNET, IGN and Lifehacker. In April 2023, the EU's European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024, NOYB filed a complaint with the Austrian Datenschutzbehörde against OpenAI for violating the European General Data Protection Regulation: a text created with ChatGPT gave a false date of birth for a living person without giving the individual the option to see the personal data used in the process, and a request to correct the mistake was denied. OpenAI claimed, additionally, that neither the recipients of ChatGPT's output nor the sources it used could be made available. OpenAI was criticized for lifting its ban on using ChatGPT for "military and warfare". Up until January 10, 2024, its "usage policies" included a ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons". In August 2025, the parents of a 16-year-old boy who died by suicide filed a wrongful death lawsuit against OpenAI (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections, including updated crisis response behavior and parental controls. Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco.
In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four alleged wrongful death. The suits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage. Stein-Erik Soelberg, 56 years old at the time, allegedly murdered his mother, Suzanne Adams; in the months prior, the paranoid, delusional man had often discussed his ideas with ChatGPT. In December 2025, Adams's estate sued OpenAI, claiming that the company shared responsibility due to the risk of so-called chatbot psychosis, although chatbot psychosis is not a recognized medical diagnosis. OpenAI responded that it would make ChatGPT safer for users disconnected from reality. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Latin_America] | [TOKENS: 18239] |
Latin America Latin America (Spanish: América Latina or Latinoamérica; Portuguese: América Latina; French: Amérique latine) is the cultural region of the Americas where Romance languages are predominantly spoken, primarily Spanish and Portuguese.[d] Latin America is defined according to cultural identity, not geography, and as such it includes countries in both North and South America. Most countries south of the United States tend to be included: Mexico and the countries of Central America, South America and the Caribbean. Commonly, it refers to Hispanic America plus Brazil. Related terms are the narrower Hispanic America, which exclusively refers to Spanish-speaking nations, and the broader Ibero-America, which includes all Iberic countries in the Americas. English and Dutch-speaking countries and territories, although in the same geographical region, are excluded (Suriname, Guyana, the Falkland Islands, Jamaica, Trinidad and Tobago, Belize, etc.). The term Latin America was first introduced in 1856 at a Paris conference titled, literally, Initiative of the Americas: Idea for a Federal Congress of the Republics (Iniciativa de la América. Idea de un Congreso Federal de las Repúblicas).[e] Chilean politician Francisco Bilbao coined the term to unify countries with shared cultural and linguistic heritage. It gained further prominence during the 1860s under the rule of Napoleon III, whose government sought to justify France's intervention in the Second Mexican Empire. Etymology and definitions Latin America is a geographical region where Spanish or Portuguese is the national language, comprising countries that share a common language, culture and traditions. The idea that part of the Americas shares a "Latin" identity as a whole can be traced back to the 1830s, in the writing of the French Saint-Simonian Michel Chevalier, who postulated that a part of the Americas was inhabited by people of a "Latin race", and that it could, therefore, ally itself with "Latin Europe", ultimately overlapping the Latin Church, in a struggle with "Teutonic Europe" and "Anglo-Saxon America" with its Anglo-Saxonism, as well as "Slavic Europe" with its Pan-Slavism. Some scholarship has identified political origins of the term. Two historians, Uruguayan Arturo Ardao and Chilean Miguel Rojas Mix, found evidence that the term "Latin America" was used earlier than historian John Leddy Phelan had claimed, and that the first use of the term was in opposition to imperialist projects in the Americas. Ardao wrote about this subject in his book Génesis de la idea y el nombre de América latina (Genesis of the Idea and the Name of Latin America, 1980), and Miguel Rojas Mix in his article "Bilbao y el hallazgo de América latina: Unión continental, socialista y libertaria" (Bilbao and the Finding of Latin America: a Continental, Socialist, and Libertarian Union, 1986). Michel Gobat notes that "Arturo Ardao, Miguel Rojas Mix, and Aims McGuinness have revealed [that] the term 'Latin America' had already been used in 1856 by Central Americans and South Americans protesting US expansion into the Southern Hemisphere". Edward Shawcross summarizes Ardao's and Rojas Mix's findings: "Ardao identified the term in a poem by a Colombian diplomat and intellectual resident in France, José María Torres Caicedo, published on 15 February 1857 in a French based Spanish-language newspaper, while Rojas Mix located it in a speech delivered in France by the radical liberal Chilean politician Francisco Bilbao in June 1856".
By the late 1850s, the term was being used in California (which had become a part of the United States), in local newspapers such as El Clamor Público, by Californios writing about América latina and latinoamérica, and identifying as Latinos, the abbreviated term for their "hemispheric membership in la raza latina". The words "Latin" and "America" were first found combined in a printed work to produce the term "Latin America" in 1856, at a conference held in Paris by the Chilean politician Francisco Bilbao. The conference had the title "Initiative of the America. The idea for a Federal Congress of Republics." The following year, Colombian writer José María Torres Caicedo also used the term in his poem "The Two Americas". Both authors advocated for the political and economic union of all Latin American countries, arguing that regional unity was the only effective means of defending their territories against further foreign interventions by the United States. They also rejected European imperialism, warning that the resurgence of non-democratic governments in Europe posed an additional threat to the stability and autonomy of Latin American nations. In criticizing European political developments, they employed the same term to characterize the deteriorating state of European governance at the time: "despotism". Several years later, during the French invasion of Mexico, Bilbao wrote another work, "Emancipation of the Spirit in the Americas", in which he asked all Latin American countries to support the Mexican cause against France, and rejected French imperialism in Asia, Africa, Europe and the Americas. He asked Latin American intellectuals to search for their "intellectual emancipation" by abandoning all French ideas, claiming that France was "Hypocrite, because she [France] calls herself protector of the Latin race just to subject it to her exploitation regime; treacherous, because she speaks of freedom and nationality, when, unable to conquer freedom for herself, she enslaves others instead!" Therefore, as Michel Gobat puts it, the term Latin America itself had an "anti-imperial genesis", and its creators were far from supporting any form of imperialism in the region, or in any other place of the globe. Historian Mauricio Tenorio-Trillo explores at length the "allure and power" of the idea of Latin America. He remarks at the outset, "The idea of 'Latin America' ought to have vanished with the obsolescence of racial theory... But it is not easy to declare something dead when it can hardly be said to have existed," going on to say, "The term is here to stay, and it is important." Following in the tradition of Chilean writer Francisco Bilbao, who excluded Brazil, Argentina and Paraguay from his early conceptualization of Latin America, Chilean historian Jaime Eyzaguirre has criticized the term Latin America for "disguising" and "diluting" the Spanish character of a region (i.e. Hispanic America) with the inclusion of nations that, according to him, do not share the same pattern of conquest and colonization. Quebec and Acadia, Francophone parts of North America, are generally excluded from the definition of Latin America. Latin America can be subdivided into several subregions based on geography, politics, demographics and culture. The basic geographical subregions are North America, Central America, the Caribbean and South America; the latter contains further politico-geographical subdivisions such as the Southern Cone, the Guianas and the Andean states.
It may be subdivided on linguistic grounds into Spanish America and Portuguese America, and by some definitions, French America. History Before the arrival of Europeans in the late 15th and early 16th centuries, the region was home to many indigenous peoples, including advanced civilizations, most notably the Olmec, Maya, Muisca, Aztecs and Inca. The region came under control of the kingdoms of Spain and Portugal, which established colonies and imposed Roman Catholicism and their languages. These countries brought African slaves to their colonies as laborers, and exploited large, settled societies and their resources. The Spanish Crown regulated immigration, allowing only Christians to travel to the New World. The colonization process led to significant native population declines due to disease, forced labor, and violence. The colonizers imposed their culture, destroying native codices and artwork. Colonial-era religion played a crucial role in everyday life, with the Spanish Crown ensuring religious purity and aggressively prosecuting perceived deviations like witchcraft. Colombia became the first independent Latin American country, gaining its independence from Spain on 20 July 1810 and starting off the wars of independence in the region. In the early nineteenth century, nearly all areas of Spanish America attained independence by armed struggle, with the exceptions of Cuba and Puerto Rico. Brazil, which had become a monarchy separate from Portugal, became a republic in the late nineteenth century. Political independence from European monarchies did not result in the abolition of black slavery in the new nations, and it resulted in political and economic instability in Spanish America immediately after independence. Yet, as regional caudillos began to rise in power, nation-builders came to view themselves as more modern than their former European colonizers. Leaders began to shift away from aristocracy toward republicanism and democracy, which allowed all citizens, not just the Creole elites, to have a voice in politics. This shift helped unify many of the Latin American nations, as all people, even illiterate people, would gather in their communities to talk about political ideals and how they should be used in their nation. Great Britain and the United States exercised significant influence in the post-independence era, resulting in a form of neo-colonialism, where political sovereignty remained in place, but foreign powers exercised considerable power in the economic sphere. Newly independent nations faced domestic and interstate conflicts, struggling with economic instability and social inequality. The 20th century brought U.S. intervention and the Cold War's impact on the region, with revolutions in countries like Cuba influencing Latin American politics. The late 20th and early 21st centuries saw shifts towards left-wing governments, followed by conservative resurgences, and a recent resurgence of left-wing politics in several countries. In many countries in the early 2000s, left-wing political parties rose to power, a shift known as the Pink tide.
The presidencies of Hugo Chávez (1999–2013) in Venezuela, Ricardo Lagos and Michelle Bachelet in Chile, Lula da Silva and Dilma Rousseff of the Workers Party (PT) in Brazil, Néstor Kirchner and his wife Cristina Fernández in Argentina, Tabaré Vázquez and José Mujica in Uruguay, Evo Morales in Bolivia, Daniel Ortega in Nicaragua, Rafael Correa in Ecuador, Fernando Lugo in Paraguay, Manuel Zelaya in Honduras (removed from power by a coup d'état), and Mauricio Funes and Salvador Sánchez Cerén in El Salvador are all part of this wave of left-wing politicians who often declare themselves socialists, Latin Americanists, or anti-imperialists, often implying opposition to US policies towards the region. An aspect of this has been the creation of the eight-member ALBA alliance, or "The Bolivarian Alliance for the Peoples of Our America" (Spanish: Alianza Bolivariana para los Pueblos de Nuestra América), by some of these countries. Following the pink tide, there was a Conservative wave across Latin America. In Mexico, the right-wing National Action Party (PAN) won the presidential election of 2000 with its candidate Vicente Fox, ending the 71-year rule of the Institutional Revolutionary Party. He was succeeded six years later by another conservative, Felipe Calderón (2006–2012), who attempted to crack down on the Mexican drug cartels and instigated the Mexican drug war. Several right-wing leaders rose to power, including Argentina's Mauricio Macri and Brazil's Michel Temer, following the impeachment of the country's first female president. In Chile, the conservative Sebastián Piñera succeeded the socialist Michelle Bachelet in 2017. In 2019, the center-right Luis Lacalle Pou ended a 15-year leftist rule in Uruguay, after defeating the Broad Front candidate. Economically, the 2000s commodities boom had positive effects for many Latin American economies. Another trend was the rapidly increasing importance of their relations with China. However, with the Great Recession beginning in 2008, the commodity boom came to an end, resulting in economic stagnation or recession in some countries. A number of left-wing governments of the Pink tide lost support. The worst-hit was Venezuela, which is facing severe social and economic upheaval. Charges against a major Brazilian conglomerate, Odebrecht, have raised allegations of corruption across the region's governments (see Operation Car Wash). This bribery ring has become the largest corruption scandal in Latin American history. As of July 2017, the highest-ranking politicians charged were former Brazilian President Luiz Inácio Lula da Silva, who was arrested, and former Peruvian presidents Ollanta Humala and Alejandro Toledo; Toledo fled to the United States and was later extradited back to Peru. The COVID-19 pandemic proved a political challenge for many unstable Latin American democracies, with scholars identifying a decline in civil liberties as a result of opportunistic emergency powers. This was especially true for countries with strong presidential regimes, such as Brazil. Geography The environment of Latin America has been changed by human activity: the expansion of agriculture, new agricultural technologies including the Green Revolution, the extraction of minerals, the growth of cities, and the redirection of rivers by the construction of dams for irrigation, drinking water, and hydroelectric power. Since the twentieth century, there has been a growing movement to protect nature, and many governments have sought recognition of natural sites as UNESCO World Heritage Sites.
Brazil, Mexico, and Peru currently have the greatest number of natural sites. Economy According to Goldman Sachs' BRICS review of emerging economies, by 2050 the largest economies in the world will be as follows: China, United States, India, Japan, Germany, United Kingdom, Mexico and Brazil. The four countries with the strongest agricultural sectors in South America are Brazil, Argentina, Chile and Colombia. Brazil is the world's largest exporter of chicken meat: 3.77 million tons in 2019. The country had the second largest herd of cattle in the world, 22.2% of the world herd. The country was the second largest producer of beef in 2019, responsible for 15.4% of global production. It was also the third largest world producer of milk in 2018, producing 35.1 billion liters that year. In 2019, Brazil was the fourth largest pork producer in the world, with almost four million tons. In 2018, Argentina was the fourth largest producer of beef in the world, with a production of 3 million tons (behind only the United States, Brazil and China). Uruguay is also a major meat producer. In 2018, it produced 589 thousand tons of beef. In the production of chicken meat, Mexico is among the ten largest producers in the world, Argentina among the 15 largest and Peru and Colombia among the 20 largest. In beef production, Mexico is one of the ten largest producers in the world and Colombia is one of the 20 largest producers. In the production of pork, Mexico is among the 15 largest producers in the world. In the production of honey, Argentina is among the five largest producers in the world, Mexico among the ten largest and Brazil among the 15 largest. In terms of cow's milk production, Mexico is among the 15 largest producers in the world and Argentina among the 20 largest. Mining is one of the most important economic sectors in Latin America, especially for Chile, Peru and Bolivia, whose economies are highly dependent on this sector. In terms of gemstones, Brazil is the world's largest producer of amethysts, topaz, and agates and one of the main producers of tourmaline, emeralds, aquamarines, garnets and opals. Chile contributes about a third of the world's copper production. In addition, Chile was, in 2019, the world's largest producer of iodine and rhenium, the second largest producer of lithium and molybdenum, the sixth largest producer of silver, the seventh largest producer of salt, the eighth largest producer of potash, the thirteenth largest producer of sulfur and the thirteenth largest producer of iron ore in the world. In 2019, Peru was the second largest world producer of copper and silver, the eighth largest world producer of gold, third largest world producer of lead, second largest world producer of zinc, fourth largest world producer of tin, fifth largest world producer of boron, and fourth largest world producer of molybdenum. In 2019, Bolivia was the eighth largest world producer of silver; fourth largest world producer of boron; fifth largest world producer of antimony; fifth largest world producer of tin; sixth largest world producer of tungsten; seventh largest producer of zinc, and the eighth largest producer of lead.
In 2019, Mexico was the world's largest producer of silver (representing almost 23% of world production, producing more than 200 million ounces in 2019); ninth largest producer of gold, the eighth largest producer of copper, the world's fifth largest producer of lead, the world's sixth largest producer of zinc, the world's fifth largest producer of molybdenum, the world's third largest producer of mercury, the world's fifth largest producer of bismuth, the world's 13th largest producer of manganese and the 23rd largest world producer of phosphate. It is also the eighth largest world producer of salt. In 2019, Argentina was the fourth largest world producer of lithium, the ninth largest world producer of silver, the 17th largest world producer of gold and the seventh largest world producer of boron. Colombia is the world's largest producer of emeralds. In the production of gold, between 2006 and 2017, the country produced 15 tons per year until 2007, when its production increased significantly, breaking a record of 66.1 tons extracted in 2012. In 2017, it extracted 52.2 tons. The country is among the 25 largest gold producers in the world. In the production of silver, in 2017 the country extracted 15.5 tons. In the production of oil, Brazil was the tenth largest oil producer in the world in 2019, with 2.8 million barrels a day. Mexico was the twelfth largest, with 2.1 million barrels a day, Colombia in 20th place with 886 thousand barrels a day, Venezuela in twenty-first place with 877 thousand barrels a day, Ecuador in 28th with 531 thousand barrels a day and Argentina in 29th with 507 thousand barrels a day. Since Venezuela and Ecuador consume little oil and export most of their production, they are part of OPEC. Venezuela had a big drop in production after 2015 (when it produced 2.5 million barrels a day), falling in 2016 to 2.2 million, in 2017 to 2 million, in 2018 to 1.4 million and in 2019 to 877 thousand, due to lack of investment. In the production of natural gas, in 2018, Argentina produced 1,524 bcf (billions of cubic feet), Mexico produced 999, Venezuela 946, Brazil 877, Bolivia 617, Peru 451, and Colombia 379. In the production of coal, the continent had three of the 30 largest world producers in 2018: Colombia (12th), Mexico (24th) and Brazil (27th). The World Bank annually lists the top manufacturing countries by total manufacturing value. According to the 2019 list, few countries in Latin America stand out in industrial activity: Brazil, Argentina, Mexico and, less prominently, Chile. Begun late, the industrialization of these countries received a great boost from World War II: the conflict prevented the countries at war from buying the products they were used to importing and from exporting what they produced, which pushed Latin American countries to manufacture for themselves. At that time, benefiting from the abundant local raw material, the low wages paid to the labor force and a certain specialization brought by immigrants, countries such as Brazil, Mexico and Argentina, as well as Venezuela, Chile, Colombia and Peru, were able to implement important industrial parks. In general, in these countries there are industries that require little capital and simple technology for their installation, such as the food processing and textile industries. The basic industries (steel, etc.) also stand out, as well as the metallurgical and mechanical industries. The industrial parks of Brazil, Mexico, Argentina and Chile, however, present much greater diversity and sophistication, producing advanced technology items.
In the rest of the Latin American countries, mainly in Central America, the processing industries of primary products for export predominate. In the food industry, in 2019, Brazil was the second largest exporter of processed foods in the world. In 2016, the country was the second largest producer of pulp in the world and the eighth largest producer of paper. In the footwear industry, in 2019, Brazil ranked fourth among world producers. In 2019, the country was the eighth largest producer of vehicles and the ninth largest producer of steel in the world. In 2018, the chemical industry of Brazil was the eighth largest in the world. In the textile industry, Brazil, although it was among the five largest world producers in 2013, is very little integrated into world trade. In the aviation sector, Brazil has Embraer, the third largest aircraft manufacturer in the world, behind Boeing and Airbus. Transport in Latin America is carried out mainly by road, the most developed mode in the region. There is also a considerable infrastructure of ports and airports. The railway and river sectors, although they have potential, are usually treated as secondary. Brazil has more than 1.7 million km of roads, of which 215,000 km are paved, and about 14,000 km are divided highways. The two most important highways in the country are BR-101 and BR-116. Argentina has more than 600,000 km of roads, of which about 70,000 km are paved, and about 2,500 km are divided highways. The three most important highways in the country are Route 9, Route 7 and Route 14. Colombia has about 210,000 km of roads, and about 2,300 km are divided highways. Chile has about 82,000 km of roads, 20,000 km of which are paved, and about 2,000 km are divided highways. The most important highway in the country is Route 5 (the Pan-American Highway). These four countries have the best road infrastructure and the largest number of divided highways in South America. The roadway network in Mexico has an extent of 366,095 km (227,481 mi), of which 116,802 km (72,577 mi) are paved. Of these, 10,474 km (6,508 mi) are multi-lane expressways: 9,544 km (5,930 mi) are four-lane highways and the rest have 6 or more lanes. Due to the Andes Mountains, Amazon River and Amazon Forest, there have always been difficulties in implementing transcontinental or bioceanic highways. Practically the only route that existed was the one that connected Brazil to Buenos Aires, in Argentina, and later to Santiago, in Chile. However, in recent years, with the combined effort of countries, new routes have started to emerge, such as Brazil-Peru (the Interoceanic Highway), and a new highway between Brazil, Paraguay, northern Argentina and northern Chile (the Bioceanic Corridor). There are more than 2,000 airports in Brazil. The country has the second largest number of airports in the world, behind only the United States. São Paulo International Airport, located in the Metropolitan Region of São Paulo, is the largest and busiest in the country – the airport connects São Paulo to practically all major cities around the world. Brazil has 44 international airports, such as those in Rio de Janeiro, Brasília, Belo Horizonte, Porto Alegre, Florianópolis, Cuiabá, Salvador, Recife, Fortaleza, Belém and Manaus, among others. Argentina has important international airports such as Buenos Aires, Cordoba, Bariloche, Mendoza, Salta, Puerto Iguazú, Neuquén and Ushuaia, among others.
Chile has important international airports such as Santiago, Antofagasta, Puerto Montt, Punta Arenas and Iquique, among others. Colombia has important international airports such as Bogotá, Medellín, Cartagena, Cali and Barranquilla, among others. Peru has important international airports such as Lima, Cuzco and Arequipa. Other important airports are those in the capitals of Uruguay (Montevideo), Paraguay (Asunción), Bolivia (La Paz) and Ecuador (Quito). The 10 busiest airports in South America in 2017 were: São Paulo-Guarulhos (Brazil), Bogotá (Colombia), São Paulo-Congonhas (Brazil), Santiago (Chile), Lima (Peru), Brasília (Brazil), Rio de Janeiro (Brazil), Buenos Aires-Aeroparque (Argentina), Buenos Aires-Ezeiza (Argentina), and Minas Gerais (Brazil). There are 1,580 airports in Mexico, the fourth-largest number of airports by country in the world. The seven largest airports—which absorb 90% of air travel—are (in order of air traffic): Mexico City, Cancún, Guadalajara, Monterrey, Tijuana, Acapulco, and Puerto Vallarta. Considering all of Latin America, the 10 busiest airports in 2024 were: Bogotá (Colombia), Mexico City (Mexico), São Paulo-Guarulhos (Brazil), Cancún (Mexico), Santiago (Chile), Lima (Peru), São Paulo-Congonhas (Brazil), Tocumen (Panama), Guadalajara (Mexico) and Brasilia (Brazil). As for ports, Brazil has some of the busiest in South America, such as the Port of Santos, Port of Rio de Janeiro, Port of Paranaguá, Port of Itajaí, Port of Rio Grande, Port of São Francisco do Sul and Suape Port. Argentina has ports such as the Port of Buenos Aires and the Port of Rosario. Chile has important ports in Valparaíso, Caldera, Mejillones, Antofagasta, Iquique, Arica and Puerto Montt. Colombia has important ports such as Buenaventura, Cartagena Container Terminal and Puerto Bolivar. Peru has important ports in Callao, Ilo and Matarani. The 15 busiest ports in South America are: Port of Santos (Brazil), Port of Bahia de Cartagena (Colombia), Callao (Peru), Guayaquil (Ecuador), Buenos Aires (Argentina), San Antonio (Chile), Buenaventura (Colombia), Itajaí (Brazil), Valparaíso (Chile), Montevideo (Uruguay), Paranaguá (Brazil), Rio Grande (Brazil), São Francisco do Sul (Brazil), Manaus (Brazil) and Coronel (Chile). The four major seaports concentrating around 60% of the merchandise traffic in Mexico are Altamira and Veracruz in the Gulf of Mexico, and Manzanillo and Lázaro Cárdenas in the Pacific Ocean. Considering all of Latin America, the 10 largest ports in terms of movement are: Colon (Panama), Santos (Brazil), Manzanillo (Mexico), Bahia de Cartagena (Colombia), Pacifico (Panama), Callao (Peru), Guayaquil (Ecuador), Buenos Aires (Argentina), San Antonio (Chile) and Buenaventura (Colombia). The Brazilian railway network extends about 30,000 kilometers and is used mainly for transporting ores. The Argentine rail network, with 47,000 km of tracks, was one of the largest in the world and continues to be the most extensive in Latin America. It came to have about 100,000 km of rails, but the lifting of tracks and the emphasis placed on motor transport gradually reduced it. It has four different gauges and international connections with Paraguay, Bolivia, Chile, Brazil and Uruguay. Chile has almost 7,000 km of railways, with connections to Argentina, Bolivia and Peru. Colombia has only about 3,500 km of railways.
Among the main Brazilian waterways, two stand out: the Hidrovia Tietê-Paraná (which has a length of 2,400 km, 1,600 on the Paraná River and 800 km on the Tietê River, draining agricultural production from the states of Mato Grosso, Mato Grosso do Sul, Goiás and part of Rondônia, Tocantins and Minas Gerais) and the Hidrovia do Solimões-Amazonas (it has two sections: the Solimões, which extends from Tabatinga to Manaus, with approximately 1600 km, and the Amazonas, which extends from Manaus to Belém, with 1650 km. Almost all passenger transport from the Amazon plain is done by this waterway, in addition to practically all cargo transportation that is directed to the major regional centers of Belém and Manaus). In Brazil, this transport is still underutilized: the most important waterway stretches, from an economic point of view, are found in the Southeast and South of the country. Its full use still depends on the construction of locks, major dredging works and, mainly, of ports that allow intermodal integration. In Argentina, the waterway network is made up of the La Plata, Paraná, Paraguay and Uruguay rivers. The main river ports are Zárate and Campana. The port of Buenos Aires is historically the first in individual importance, but the area known as Up-River, which stretches along 67 km of the Santa Fé portion of the Paraná River, brings together 17 ports that concentrate 50% of the total exports of the country. As of 2023, Latin America and the Caribbean generates 60% of its electricity from renewable energy, double the global average of 30%. Despite this, fossil fuels still play a substantial role, especially in transportation and industry, with oil and gas constituting a notable portion. Approximately two-thirds of the region's energy mix comes from fossil fuels. Of the region's total electricity production, 43% is hydroelectric, 8% wind and 6% solar. The Brazilian government has undertaken an ambitious program to reduce dependence on imported petroleum. Imports previously accounted for more than 70% of the country's oil needs, but Brazil became self-sufficient in oil in 2006–2007. Brazil was the 10th largest oil producer in the world in 2019, with 2.8 million barrels per day. Production manages to supply the country's demand. In the beginning of 2020, in the production of oil and natural gas, the country exceeded 4 million barrels of oil equivalent per day for the first time. In January 2020, 3.168 million barrels of oil per day and 138.753 million cubic meters of natural gas were extracted. Brazil is one of the main world producers of hydroelectric power. In 2019, Brazil had 217 hydroelectric plants in operation, with an installed capacity of 98,581 MW, 60.16% of the country's energy generation. In the total generation of electricity, in 2019 Brazil reached 170,000 megawatts of installed capacity, more than 75% from renewable sources (the majority, hydroelectric). In 2013, the Southeast Region used about 50% of the load of the National Integrated System (SIN), being the main energy consuming region in the country. The region's installed electricity generation capacity totaled almost 42,500 MW, which represented about a third of Brazil's generation capacity. Hydroelectric generation represented 58% of the region's installed capacity, with the remaining 42% corresponding basically to thermoelectric generation. São Paulo accounted for 40% of this capacity; Minas Gerais for about 25%; Rio de Janeiro for 13.3%; and Espírito Santo for the rest.
The South Region owns the Itaipu Dam, which was the largest hydroelectric plant in the world for several years, until the inauguration of the Three Gorges Dam in China. It remains the second largest operating hydroelectric plant in the world. Brazil is the co-owner of the Itaipu Plant with Paraguay: the dam is located on the Paraná River, on the border between the countries. It has an installed generation capacity of 14 GW from 20 generating units of 700 MW each. The North Region has large hydroelectric plants, such as the Belo Monte Dam and the Tucuruí Dam, which produce much of the national energy. Brazil's hydroelectric potential has not yet been fully exploited, so the country still has the capacity to build several renewable energy plants in its territory. As of July 2022, according to ONS, total installed capacity of wind power was 22 GW, with an average capacity factor of 58%. While the world average wind capacity factor is 24.7%, there are areas in Northern Brazil, especially in Bahia State, where some wind farms record average capacity factors of over 60%; the average capacity factor in the Northeast Region is 45% on the coast and 49% in the interior. In 2019, wind energy represented 9% of the energy generated in the country. In 2019, it was estimated that the country had a wind power generation potential of around 522 GW (onshore alone), enough energy to meet three times the country's current demand. In 2021, Brazil was the 7th country in the world in terms of installed wind power (21 GW), and the 4th largest producer of wind energy in the world (72 TWh), behind only China, the USA and Germany. Nuclear energy accounts for about 4% of Brazil's electricity. The nuclear power generation monopoly is owned by Eletronuclear (Eletrobrás Eletronuclear S/A), a wholly owned subsidiary of Eletrobrás. Nuclear energy is produced by two reactors at Angra, at the Central Nuclear Almirante Álvaro Alberto (CNAAA) on the Praia de Itaorna in Angra dos Reis, Rio de Janeiro. It consists of two pressurized water reactors, Angra I, with a capacity of 657 MW, connected to the power grid in 1982, and Angra II, with a capacity of 1,350 MW, connected in 2000. A third reactor, Angra III, with a projected output of 1,350 MW, is planned to be finished. As of October 2022, according to ONS, total installed capacity of photovoltaic solar was 21 GW, with an average capacity factor of 23%. Some of the most irradiated Brazilian states are Minas Gerais, Bahia and Goiás, which indeed hold world irradiation records. In 2019, solar power represented 1.27% of the energy generated in the country. In 2021, Brazil was the 14th country in the world in terms of installed solar power (13 GW), and the 11th largest producer of solar energy in the world (16.8 TWh). In 2020, Brazil was the 2nd largest country in the world in the production of energy through biomass (energy production from solid biofuels and renewable waste), with 15.2 GW installed. After Brazil, Mexico is the country in Latin America that most stands out in energy production. In 2020, the country was the 14th largest petroleum producer in the world, and in 2018 it was the 12th largest exporter. In natural gas, the country was, in 2015, the 21st largest producer in the world, and in 2007 it was the 29th largest exporter. Mexico was also the world's 24th largest producer of coal in 2018.
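For context on the capacity factors quoted above: a capacity factor compares the energy a plant actually generated over a period with what it would have generated running continuously at full installed capacity. A minimal sketch of that arithmetic, using the 2021 Brazilian wind figures cited in this section (21 GW installed, 72 TWh produced); the helper function itself is illustrative, not from the source:

```python
def capacity_factor(energy_twh: float, capacity_gw: float, hours: float = 8760.0) -> float:
    """Fraction of the theoretical maximum output actually generated.

    energy_twh  -- energy generated over the period, in TWh
    capacity_gw -- installed capacity, in GW
    hours       -- hours in the period (8760 for a non-leap year)
    """
    max_possible_twh = capacity_gw * hours / 1000.0  # GW x h = GWh; /1000 gives TWh
    return energy_twh / max_possible_twh

# Brazil's 2021 wind fleet as cited above: 21 GW installed, 72 TWh produced.
print(f"{capacity_factor(72.0, 21.0):.1%}")  # about 39% averaged over the full year
```

Note that a fleet-wide annual figure computed this way can differ from the instantaneous or per-farm factors reported by ONS, since capacity added during the year generates for only part of it.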
In renewable energies, in 2020, Mexico ranked 14th in the world in terms of installed wind energy (8.1 GW), 20th in the world in terms of installed solar energy (5.6 GW) and 19th in the world in terms of installed hydroelectric power (12.6 GW). In third place, Colombia stands out: in 2020, the country was the 20th largest petroleum producer in the world, and in 2015 it was the 19th largest exporter. In natural gas, the country was, in 2015, the 40th largest producer in the world. Colombia's biggest highlight is in coal, where the country was, in 2018, the world's 12th largest producer and the 5th largest exporter. In renewable energies, in 2020, the country ranked 45th in the world in terms of installed wind energy (0.5 GW), 76th in the world in terms of installed solar energy (0.1 GW) and 20th in the world in terms of installed hydroelectric power (12.6 GW). Venezuela, which was one of the world's largest oil producers (about 2.5 million barrels/day in 2015) and one of the largest exporters, has had its production drastically reduced in recent years due to its political problems: in 2016, it dropped to 2.2 million, in 2017 to 2 million, in 2018 to 1.4 million and in 2019 to 877 thousand, reaching only 300,000 barrels/day at one point. The country also stands out in hydroelectricity, where it was the 14th country in the world in terms of installed capacity in 2020 (16.5 GW). Argentina was, in 2017, the world's 18th largest producer of natural gas and the largest in Latin America, in addition to being the 28th largest oil producer; although the country has the Vaca Muerta field, which holds close to 16 billion barrels of technically recoverable shale oil and is the second largest shale natural gas deposit in the world, the country lacks the capacity to exploit the deposit: it requires capital, technology and expertise that can only come from offshore energy companies, which view Argentina and its erratic economic policies with considerable suspicion and do not want to invest in the country. In renewable energies, in 2020, the country ranked 27th in the world in terms of installed wind energy (2.6 GW), 42nd in the world in terms of installed solar energy (0.7 GW) and 21st in the world in terms of installed hydroelectric power (11.3 GW). The country has great future potential for the production of wind energy in the Patagonia region. Chile, although currently not a major energy producer, has great future potential for solar energy production in the Atacama Desert region. Paraguay stands out today in hydroelectric production thanks to the Itaipu Power Plant. Trinidad and Tobago and Bolivia stand out in the production of natural gas, where they were, respectively, the 20th and 31st largest in the world in 2015. Ecuador, because it consumes little energy, is part of OPEC; it was the 27th largest oil producer in the world in 2020 and the 22nd largest exporter in 2014. Income from tourism is key to the economy of several Latin American countries. Mexico is the only Latin American country to be ranked in the top 10 worldwide in the number of tourist visits. It received by far the largest number of international tourists, with 39.3 million visitors in 2017, followed by Argentina, with 6.7 million; then Brazil, with 6.6 million; Chile, with 6.5 million; the Dominican Republic, with 6.2 million; Cuba, with 4.3 million; and Peru and Colombia, with 4.0 million each.
The World Tourism Organization reports the following destinations as the top six tourism earners for the year 2017: Mexico, with US$21,333 million; the Dominican Republic, with US$7,178 million; Brazil, with US$6,024 million; Colombia, with US$4,773 million; Argentina, with US$4,687 million; and Panama, with US$4,258 million. Places such as Cancún, Riviera Maya, Chichen Itza, Cabo San Lucas, Mexico City, Acapulco, Puerto Vallarta, Guanajuato City, San Miguel de Allende and Guadalajara in Mexico; Punta Cana and Santo Domingo in the Dominican Republic; Punta del Este in Uruguay; San Juan and Ponce in Puerto Rico; Panama City in Panama; Poás Volcano National Park in Costa Rica; Viña del Mar in Chile; Rio de Janeiro, Florianópolis, Iguazu Falls, São Paulo, Armação dos Búzios, Salvador, Bombinhas, Angra dos Reis, Balneário Camboriú, Paraty, Ipojuca, Natal, Cairu, Fortaleza and Itapema in Brazil; Buenos Aires, Bariloche, Salta, Jujuy, Perito Moreno Glacier, Valdes Peninsula, the Guarani Jesuit Missions in the provinces of Misiones and Corrientes, Ischigualasto Provincial Park, Ushuaia and Patagonia in Argentina; Isla Margarita, Angel Falls, the Los Roques archipelago and Gran Sabana in Venezuela; Machu Picchu, Lima, the Nazca Lines and Cuzco in Peru; Lake Titicaca, Salar de Uyuni, La Paz and the Jesuit Missions of Chiquitos in Bolivia; Tayrona National Natural Park, Santa Marta, Bogotá, Cali, Medellín, Cartagena and San Andrés in Colombia; and the Galápagos Islands in Ecuador are popular among international visitors in the region. The major trade blocs (or agreements) in the region are the Pacific Alliance and Mercosur. Minor blocs or trade agreements are the G3 Free Trade Agreement, the Dominican Republic – Central America Free Trade Agreement (DR-CAFTA), the Caribbean Community (CARICOM) and the Andean Community of Nations (CAN). However, major reconfigurations are taking place along opposing approaches to integration and trade; Venezuela has officially withdrawn from both the CAN and G3, and it has been formally admitted into Mercosur (pending ratification from the Paraguayan legislature). The president-elect of Ecuador has manifested his intentions of following the same path. This bloc nominally opposes any Free Trade Agreement (FTA) with the United States, although Uruguay has manifested its intention otherwise. Chile, Peru, Colombia and Mexico are the only four Latin American nations that have an FTA with the United States and Canada, both members of the North American Free Trade Agreement (NAFTA). China's economic influence in Latin America increased substantially in the 21st century. Imports from China were valued at $8.3 billion in 2000; by 2022 the figure was $450 billion, and China had grown to be the largest trading partner of South America, as well as the second largest for the broader Latin America. In particular, many of the investments are related to the Belt and Road Initiative or energy. China has also provided loans to several Latin American countries; this has raised concerns about the possibility of debt traps. Specifically, Venezuela, Brazil, Ecuador, and Argentina received the most loans from China during 2005–2016. Inequality Wealth inequality in Latin America remains a serious issue despite strong economic growth and improved social indicators.
A report released in 2013 by the UN Department of Economic and Social Affairs entitled Inequality Matters: Report of the World Social Situation observed that 'declines in the wage share have been attributed to the impact of labour-saving technological change and to a general weakening of labour market regulations and institutions.' Such declines are likely to disproportionately affect individuals in the middle and bottom of the income distribution, as they rely mostly on wages for income. In addition, the report noted that 'highly-unequal land distribution has created social and political tensions and is a source of economic inefficiency, as small landholders frequently lack access to credit and other resources to increase productivity, while big owners may not have had enough incentive to do so.'

According to the United Nations ECLAC, Latin America is the most unequal region in the world. In 2025, The Economist said that Latin America's "tax and welfare systems are shockingly bad at reducing inequality." Inequality in Latin America has deep historical roots in the racially based casta system that Latin Europeans instituted during colonial times. It has been difficult to eradicate because differences in initial endowments and opportunities among social groups have constrained the social mobility of the poorest, causing poverty to be transmitted from generation to generation and become a vicious cycle. Inequality has been reproduced and transmitted through generations because Latin American political systems grant social groups differentiated access to influence over the decision-making process, and they respond in different ways to the least favored groups, which have less political representation and capacity to exert pressure. Recent economic liberalisation also plays a role, as not everyone is equally capable of taking advantage of its benefits. Differences in opportunities and endowments tend to be based on race, ethnicity, rurality, and gender. Because inequality in gender and location is near-universal, race and ethnicity play a larger, more integral role in discriminatory practices in Latin America. These differences have a strong impact on the distribution of income, capital and political standing.

One indicator of inequality is access to and quality of education. During the first phase of globalization in Latin America, educational inequality was on the rise, peaking around the end of the 19th century. In comparison with other developing regions, Latin America then had the highest level of educational inequality, which is certainly a contributing factor to its currently high overall inequality. During the 20th century, however, educational inequality started decreasing. Latin America has the highest levels of income inequality in the world.

The following table lists all the countries in Latin America indicating a valuation of each country's Human Development Index, GDP at purchasing power parity per capita, measurement of inequality through the Gini index, measurement of poverty through the Human Poverty Index, a measure of extreme poverty based on people living on less than 1.25 dollars a day, life expectancy, murder rates and a measurement of safety through the Global Peace Index.

Demographics Urbanization accelerated starting in the mid-twentieth century, especially in capital cities, or, in the case of Brazil, in traditional economic and political hubs founded in the colonial era.
In Mexico, the rapid growth and modernization of the country's north has seen the growth of Monterrey, in Nuevo León. The following is a list of the ten largest metropolitan areas in Latin America.

Latin American populations are diverse, with descendants of the Indigenous peoples, Europeans, Africans initially brought as slaves, and Asians, as well as new immigrants. Mixing of groups was a fact of life from the moment of contact between the Old World and the New, but colonial regimes established legal and social discrimination against non-white populations simply on the basis of perceived ethnicity and skin color. Social class was usually linked to a person's racial category, with European-born Spaniards and Portuguese on top. During the colonial era, with an initial dearth of European women, European men fathered children with Indigenous and African women, offspring who were considered mixed-race. In Spanish America, the so-called Sociedad de castas or Sistema de castas was constructed by white elites to try to rationalize the processes at work.

In the sixteenth century the Spanish crown sought to protect Indigenous populations from exploitation by white elites for their labor and land. The crown created the República de indios [es] to paternalistically govern and protect Indigenous peoples. It also created the República de Españoles, which included not only European whites but all non-Indigenous peoples, such as Blacks, mulattoes, and mixed-race castas who were not dwelling in Indigenous communities. In the religious sphere, the Indigenous were deemed perpetual neophytes in the Catholic faith, which meant Indigenous men were not eligible to be ordained as Catholic priests; however, the Indigenous were also excluded from the jurisdiction of the Inquisition. Catholics saw military conquest and religious conquest as two parts of the assimilation of Indigenous populations, suppressing Indigenous religious practices and eliminating the Indigenous priesthood. Some worship continued underground. Jews and other non-Catholics, such as Protestants (all called "Lutherans"), were banned from settling and were subject to the Inquisition. Considerable mixing of populations occurred in cities, while the countryside was largely Indigenous. At independence in the early nineteenth century, formal racial and legal distinctions disappeared in many places in Spanish America, although slavery was not uniformly abolished.

Significant black populations exist in Brazil and in Spanish Caribbean islands such as Cuba and Puerto Rico, in the circum-Caribbean mainland (Venezuela, Colombia, Panama), as well as in the southern part of South America and in Central America (Honduras, Costa Rica, Nicaragua, Ecuador, and Peru), a legacy of their use in plantations. All these areas also have significant white populations. In Brazil, coastal Indigenous peoples largely died out in the early sixteenth century, with Indigenous populations surviving far from cities, sugar plantations, and other European enterprises.

Many mixed-race people in much of Latin America are tri-racial, usually of European, African, and Indigenous ancestry, of which the European (mostly Spanish/Portuguese) component tends to be the strongest. In most of Brazil and in the Spanish Caribbean, the average ancestral mix is European and African, with much smaller amounts of Indigenous ancestry.
The opposite is true in many mainland Spanish-speaking Latin American countries such as Venezuela, Colombia, and Panama, where the average ancestral mix is European and Indigenous, with smaller amounts of African. In Mexico, and in other places in northern Central America and southern South America, mixed-race people tend to be almost entirely of European and Indigenous ancestry. The Dominican Republic, Puerto Rico, Cuba, and Brazil have dominant Mulatto/tri-racial populations ("Pardo" in Brazil); in Brazil and Cuba there are equally large white populations and smaller black populations, while the Dominican Republic and Puerto Rico are more Mulatto/tri-racial dominated, with significant black and white minorities. Parts of Central America and northern South America are more diverse in that they are predominantly made up of Mestizos and Whites but also have large numbers of Mulattos, Blacks, and Indigenous people, especially Colombia, Venezuela, and Panama. The Southern Cone region—encompassing Argentina, Uruguay, and Chile—is predominantly White due to the massive European immigration that occurred from the late 19th century to the mid-20th century. The rest of Latin America, including Mexico, northern Central America—Guatemala, El Salvador, Honduras—and central South America—Peru, Ecuador, Bolivia, Paraguay—is dominated by mestizos but also has large white and Indigenous minorities. Black people make up the majority of the French Caribbean, but it is generally not considered part of Latin America.

In the nineteenth century, a number of Latin American countries sought immigrants from Europe and Asia. After the abolition of black slavery in 1888, the Brazilian monarchy fell in 1889, and another source of cheap labor to work on coffee plantations was found in Japan. Chinese male immigrants arrived in Cuba, Mexico, Peru and elsewhere. With political turmoil in Europe during the mid-nineteenth century and widespread poverty, Germans, Spaniards, and Italians immigrated to Latin America in large numbers, welcomed by Latin American governments both as a source of labor and as a way to increase the size of their white populations. In Argentina, many Afro-Argentines married Euro-Argentines.

In twentieth-century Brazil, sociologist Gilberto Freyre proposed that Brazil was a "racial democracy", with less discrimination against black people than in the United States. Even though a system of legal racial segregation was never implemented in Latin America, unlike in the United States, subsequent research has shown that in Brazil there is discrimination against darker citizens and that whites remain the elites in the country. In Mexico, the mestizo population was considered the true embodiment of "the cosmic race", according to Mexican intellectual José Vasconcelos, thus erasing other populations. There was considerable discrimination against Asians, with calls for the expulsion of Chinese in northern Mexico during the Mexican Revolution (1910–1920) and racially motivated massacres. In a number of Latin American countries, Indigenous groups have organized explicitly as Indigenous to claim human rights and influence political power. With the passage of anti-colonial resolutions in the United Nations General Assembly and the signing of resolutions for Indigenous rights, the Indigenous are able to act with legal standing to guarantee their existence within nation-states.

Spanish is the predominant language of Latin America. It is spoken as a first language by about 60% of the population.
Portuguese is spoken by about 30%, and about 10% speak other languages such as Quechua, Mayan languages, Guaraní, Aymara, Nahuatl, English, French, Dutch and Italian. Portuguese is spoken mostly in Brazil, the largest and most populous country in the region. Spanish is the official language of most of the other countries and territories on the Latin American mainland, as well as of Cuba, Puerto Rico (where it is co-official with English), and the Dominican Republic. French is spoken in Haiti and in the French overseas departments of Guadeloupe, Martinique, and French Guiana. It is also spoken by some Panamanians of Afro-Antillean descent. Dutch is the official language in Suriname, Aruba, Curaçao and Bonaire. (As Dutch is a Germanic language, these territories are not necessarily considered part of Latin America.) However, the native and co-official language of Aruba, Bonaire, and Curaçao is Papiamento, a creole language largely based on Portuguese and Spanish, with considerable influence from Dutch and from other Portuguese-based creole languages.

Amerindian languages are widely spoken in Peru, Guatemala, Bolivia, Paraguay and Mexico, and to a lesser degree in Panama, Ecuador, Brazil, Colombia, Venezuela, Argentina, and Chile. In other Latin American countries, the population of speakers of Indigenous languages tends to be very small or even non-existent, as in Uruguay. Mexico possibly contains more Indigenous languages than any other Latin American country, but the most-spoken Indigenous language there is Nahuatl. In Peru, Quechua is an official language, alongside Spanish and other Indigenous languages in the areas where they predominate. In Ecuador, while Quichua holds no official status, it is a recognized language under the country's constitution; however, it is only spoken by a few groups in the country's highlands. In Bolivia, Aymara, Quechua and Guaraní hold official status alongside Spanish. Guaraní, like Spanish, is an official language of Paraguay, spoken by a majority of the population, which is for the most part bilingual; it is also co-official with Spanish in the Argentine province of Corrientes. In Nicaragua, Spanish is the official language, but on the country's Caribbean coast English and Indigenous languages such as Miskito, Sumo, and Rama also hold official status. Colombia recognizes all Indigenous languages spoken within its territory as official, though fewer than 1% of its population are native speakers of these languages. Nahuatl is one of the 62 Native languages spoken by Indigenous people in Mexico, which are officially recognized by the government as "national languages" along with Spanish.

Other European languages spoken in Latin America include English, by half of the current population in Puerto Rico as well as in nearby countries that may or may not be considered Latin American, like Belize and Guyana, and by descendants of British settlers in Argentina and Chile; German, in southern Brazil, southern Chile, and portions of Argentina, Venezuela and Paraguay; Italian, in Brazil, Argentina, Venezuela, and Uruguay; Ukrainian, Polish, and Russian, in southern Brazil and Argentina; and Welsh, in southern Argentina. Non-European or Asian languages include Japanese in Brazil, Peru, Bolivia, and Paraguay; Korean in Brazil, Argentina, Paraguay, and Chile; Arabic in Argentina, Brazil, Colombia, Venezuela, and Chile; and Chinese throughout South America.
Countries like Venezuela, Argentina and Brazil have their own dialects or variations of German and Italian. In several nations, especially in the Caribbean region, creole languages are spoken. The most widely spoken creole language in Latin America and the Caribbean is Haitian Creole, the predominant language of Haiti, derived primarily from French and certain West African tongues, with Amerindian, English, Portuguese and Spanish influences as well. Creole languages of mainland Latin America are similarly derived from European languages and various African tongues. The aforementioned Papiamento, commonly spoken on the Dutch Caribbean ABC Islands, is a Portuguese-based creole. The Garifuna language is spoken along the Caribbean coast in Honduras, Guatemala, Nicaragua and Belize, mostly by the Garifuna people, a mixed-race Zambo people who were the result of mixing between Indigenous Caribbeans and escaped Black slaves. Primarily an Arawakan language, it has influences from Caribbean and European languages.

Archaeologists have deciphered over 15 distinct pre-Columbian writing systems from Mesoamerican societies. The ancient Maya had the most sophisticated written language, but since texts were largely confined to the religious and administrative elite, traditions were passed down orally. Oral traditions also prevailed in other major Indigenous groups including, but not limited to, the Aztecs and other Nahuatl speakers, the Quechua and Aymara of the Andean regions, the Quiché of Central America, the Tupi-Guaraní in today's Brazil, the Guaraní in Paraguay and the Mapuche in Chile.

The vast majority of Latin Americans are Christians (90%), mostly Roman Catholics belonging to the Latin Church. About 70% of the Latin American population considers itself Catholic. As of 2012, Latin America had, in absolute terms, the world's second largest Christian population, after Europe. According to the detailed Pew multi-country survey of 2014, 69% of the Latin American population is Catholic and 19% is Protestant. Protestants are 26% in Brazil and over 40% in much of Central America. More than half of these are converts from Roman Catholicism.

Latin America has the highest rates of non-marital childbearing in the world, with 55–74% of all children in the region born to unmarried parents. In most countries in this traditionally Catholic region, children born outside marriage are now the norm. In 2014, non-marital births were 74% in Colombia, 70% in Paraguay, 69% in Peru, 63% in the Dominican Republic, 58% in Argentina, and 55% in Mexico. In Brazil, non-marital births increased to 65.8% in 2009, up from 56.2% in 2000. In Chile, non-marital births increased to 70.7% in 2013, up from 48.3% in 2000. Latin America also has relatively high teenage pregnancy rates, much higher than North America, Europe, and much of Asia, but lower than sub-Saharan Africa.

The entire hemisphere was settled by migrants from Asia, Europe, and Africa. Indigenous Amerindian populations settled throughout the hemisphere before the arrival of Europeans in the late fifteenth and sixteenth centuries and the forced migration of slaves from Africa. In the post-independence period, a number of Latin American countries sought to attract European immigrants as a source of labor as well as to deliberately change the proportions of racial and ethnic groups within their borders. Chile, Argentina, and Brazil actively recruited labor from Catholic southern Europe, where populations were poor and sought better economic opportunities.
Many nineteenth-century immigrants went to the United States and Canada, but a significant number arrived in Latin America. Although Mexico tried to attract immigrants, it largely failed. As black slavery was abolished in Brazil in 1888, coffee growers recruited Japanese migrants to work on coffee plantations; there is a significant population of Japanese descent in Brazil. Cuba and Peru recruited Chinese labor in the late nineteenth century. Some Chinese immigrants who were excluded from immigrating to the United States settled in northern Mexico. When the United States acquired its southwest by conquest in the Mexican–American War, Latin American populations did not cross the border to the United States; the border crossed them.

In the twentieth century there have been several types of migration. One is the movement of rural populations within a given country to cities in search of work, causing many Latin American cities to grow significantly. Another is the international movement of populations, often fleeing repression or war. Other international migration is for economic reasons, often unregulated or undocumented. Mexicans immigrated to the U.S. during the violence of the Mexican Revolution (1910–1920) and the religious Cristero War (1926–29); during World War II, Mexican men worked in the U.S. in the bracero program. Economic migration from Mexico followed the crash of the Mexican economy in the 1980s. Spanish refugees fled to Mexico following the fascist victory in the Spanish Civil War (1936–39), with some 50,000 exiles finding refuge at the invitation of President Lázaro Cárdenas. Following World War II, a larger wave of refugees, many of them Jews, settled in Argentina, Brazil, Chile, Cuba, and Venezuela. Some were only transiting through the region, but others stayed and created communities. A number of Nazis escaped to Latin America, living under assumed names in an attempt to avoid attention and prosecution. In the aftermath of the Cuban Revolution, middle-class and elite Cubans moved to the U.S., particularly to Florida. Some fled Chile for the U.S. and Europe after the 1973 military coup. Colombians migrated to Spain and the United Kingdom during the region's political turmoil, compounded by the rise of narcotrafficking and guerrilla warfare. During the Central American wars of the 1970s to the 1990s, many Salvadorans, Guatemalans, and Hondurans migrated to the U.S. to escape narcotrafficking, gangs, and poverty. As living conditions deteriorated in Venezuela under Hugo Chávez and Nicolás Maduro, many left for neighboring Colombia and Ecuador. In the 1990s, economic stress in Ecuador during La Década Perdida triggered considerable migration to Spain and to the U.S.

Some Latin American countries seek to strengthen links between migrants and their states of origin while promoting their integration in the receiving state. These emigrant policies focus on the rights, obligations and opportunities for participation of emigrated citizens who already live outside the borders of the country of origin. Research on Latin America shows that the extension of policies towards migrants is linked to a focus on civil rights and state benefits that can positively influence integration in recipient countries. In addition, the tolerance of dual citizenship has spread more in Latin America than in any other region of the world.

Despite significant progress, education access and school completion remain unequal in Latin America.
The region has made great progress in educational coverage; almost all children attend primary school, and access to secondary education has increased considerably. Quality issues such as poor teaching methods, lack of appropriate equipment, and overcrowding exist throughout the region. These issues lead adolescents to drop out of the educational system early. Most educational systems in the region have implemented various types of administrative and institutional reforms that have extended reach to places and communities that had no access to education services in the early 1990s. School meal programs are also employed to expand access to education, and at least 23 countries in the Latin America and Caribbean region have large-scale school feeding activities, altogether reaching 88% of primary school-age children in the region.

Compared to prior generations, Latin American youth have seen an increase in their levels of education. On average, they have completed two more years of school than their parents. However, there are still 23 million children in the region between the ages of 4 and 17 outside of the formal education system. Estimates indicate that 30% of preschool-age children (ages 4–5) do not attend school, and for the most vulnerable populations, the poor and rural, this proportion exceeds 40 percent. Among primary school-age children (ages 6 to 12), attendance is almost universal; however, there is still a need to enroll five million more children in the primary education system. These children mostly live in remote areas, are Indigenous or Afro-descendants and live in extreme poverty. Among people between the ages of 13 and 17 years, only 80% are full-time students, and only 66% of these advance to secondary school. These percentages are lower among vulnerable population groups: only 75% of the poorest youth between the ages of 13 and 17 years attend school. Tertiary education has the lowest coverage, with 70% of people between the ages of 18 and 25 years outside of the education system. Currently, more than half of low-income or rural children fail to complete nine years of education.

List of countries by life expectancy at birth for 2023 according to the World Bank Group.

Latin America has made notable progress in developing healthcare throughout recent decades, driven by both domestic policy and international collaborations. Water supply and sanitation in Latin America, however, are characterized by insufficient access and in many cases by poor service quality, with detrimental impacts on public health. Water and sanitation services are provided by a vast array of mostly local service providers under an often fragmented policy and regulatory framework. Financing of water and sanitation remains a serious challenge.

Latin America is home to some of the few countries of the world with a complete ban on abortion and minimal policies on reproductive rights, but it also contains some of the most progressive reproductive rights movements in the world. Debates on reproductive rights in the region occur over abortion, sexual autonomy, reproductive healthcare, and access to contraceptive measures. Modern reproductive rights movements most notably include the Green Wave (Marea Verde), which has led to much reproductive legislation reform. Cuba has been a regional leader in more liberal reproductive laws, while other countries like El Salvador and Honduras have increased restrictions on reproductive rights.
HIV/AIDS has been a public health concern for Latin America due to the continued prevalence of the disease. As of 2024, approximately 2.5 million people in Latin America were living with HIV. This is an increase from the 2018 estimate that 2.2 million people had HIV in Latin America and the Caribbean, making the HIV prevalence rate approximately 0.4% in Latin America. In 2024, there were approximately 120,000 new HIV infections in Latin America, nearly all of which were among adults aged 15 and older. That same year, the region experienced 27,000 AIDS-related deaths. Overall, approximately 71% of all people living with HIV in Latin America are on antiretroviral treatment (ART), though this regional treatment coverage is lower for children (ages 0–14) and pregnant women. Some demographic groups in Latin America have higher prevalence rates for HIV/AIDS: men who have sex with men have a prevalence rate of 10.6%, and transgender women have one of the highest rates within the population, at 17.7%. Female sex workers and drug users also have higher prevalence for the disease than the general population (4.9% and 1%–49.7% respectively).

Latin America has been cited by numerous sources as the most dangerous region in the world. Studies have shown that Latin America contains the majority of the world's most dangerous cities. Crime and violence prevention and public security are now important issues for governments and citizens in the Latin America region. Homicide rates in Latin America are the highest in the world. From the early 1980s through the mid-1990s, homicide rates increased by 50 percent. Latin America experienced more than 2.5 million murders between 2000 and 2017. There were a total of 63,880 murders in Brazil in 2018. The most frequent victims of such homicides are young men, 69 percent of them between the ages of 15 and 19.

Countries in Latin America and the Caribbean with the highest homicide rates per year per 100,000 inhabitants in 2015 were: El Salvador 109, Honduras 64, Venezuela 57, Jamaica 43, Belize 34.4, St. Kitts and Nevis 34, Guatemala 34, Trinidad and Tobago 31, the Bahamas 30, Brazil 26.7, Colombia 26.5, the Dominican Republic 22, St. Lucia 22, Guyana 19, Mexico 16, Puerto Rico 16, Ecuador 13, Grenada 13, Costa Rica 12, Bolivia 12, Nicaragua 12, Panama 11, Antigua and Barbuda 11, and Haiti 10. Most of the countries with the highest homicide rates are in Africa and Latin America. Countries in Central America, like El Salvador and Honduras, top the list of homicides in the world. Brazil has more overall homicides than any other country in the world, at 50,108, accounting for one in 10 globally. Crime-related violence is the biggest threat to public health in Latin America, striking more victims than HIV/AIDS or any other infectious disease. Countries with the lowest homicide rates per year per 100,000 inhabitants as of 2015 were: Chile 3, Peru 7, Argentina 7, Uruguay 8 and Paraguay 9.

Culture Latin American culture is a mixture of many influences. Beyond the tradition of Indigenous art, the development of Latin American visual art owed much to the influence of Spanish, Portuguese and French Baroque painting, which in turn often followed the trends of the Italians. In general, artistic Eurocentrism began to wane in the early twentieth century with the increased appreciation for Indigenous forms of representation. From the early twentieth century, the art of Latin America was greatly inspired by the Constructivist Movement.
The movement rapidly spread from Russia to Europe and then into Latin America. Joaquín Torres García and Manuel Rendón have been credited with bringing the Constructivist Movement into Latin America from Europe. An important artistic movement generated in Latin America is muralism, represented by Diego Rivera, David Alfaro Siqueiros, José Clemente Orozco and Rufino Tamayo in Mexico, Santiago Martinez Delgado and Pedro Nel Gómez in Colombia and Antonio Berni in Argentina. Some of the most impressive muralist works can be found in Mexico, Colombia, New York City, San Francisco, Los Angeles and Philadelphia. Painter Frida Kahlo, one of the most famous Mexican artists, painted about her own life and the Mexican culture in a style combining Realism, Symbolism and Surrealism. Kahlo's work commands the highest selling price of all Latin American paintings.

The Venezuelan Armando Reverón, whose work has begun to be recognized internationally, is one of the most important artists of the 20th century in South America; he is a precursor of Arte Povera and the Happening. In the 1960s, kinetic art emerged in Venezuela; its main representatives are Jesús Soto, Carlos Cruz-Diez, Alejandro Otero and Gego. Colombian sculptor and painter Fernando Botero has gained regional and international recognition for his works which, on first examination, are noted for their exaggerated proportions and the corpulence of the human and animal figures. The Ecuadorian Oswaldo Guayasamín is considered one of the most important and seminal artists of Ecuador and South America. In his life, he made over 13,000 paintings and held more than 180 exhibitions all over the world, including in Paris, Barcelona, New York, Buenos Aires, Moscow, Prague, and Rome. He brought his unique style of expressionism and cubism to Ecuadorian artwork during his Age of Anger period, which relates to the era of the Cold War, when the United States opposed a communist presence in South America. Social criticism of human and social inequality was central to his artwork.

Latin American film is both rich and diverse. Historically, the main centers of production have been Mexico, Argentina, Brazil, and Cuba. Latin American film flourished after sound was introduced in cinema, which added a linguistic barrier to the export of Hollywood film south of the border. Mexican cinema began in the silent era from 1896 to 1929 and flourished in the Golden Era of the 1940s. It boasted a huge industry comparable to the Hollywood of the time, with stars such as María Félix, Dolores del Río, and Pedro Infante. In the 1970s, Mexico was the location for many cult horror and action movies. More recently, films such as Amores Perros (2000) and Y tu mamá también (2001) enjoyed box office and critical acclaim and propelled Alfonso Cuarón and Alejandro González Iñárritu to the front rank of Hollywood directors. Iñárritu directed Biutiful (2010) and Birdman (2014); Alfonso Cuarón directed Harry Potter and the Prisoner of Azkaban in 2004 and Gravity in 2013. A close friend of both, Guillermo del Toro, a top-rank director working in Hollywood and Spain, directed Pan's Labyrinth (2006) and produced El Orfanato (2007). Carlos Carrera (The Crime of Father Amaro) and screenwriter Guillermo Arriaga are also among the best known modern Mexican filmmakers. Rudo y Cursi, released in Mexico in December 2008, was directed by Carlos Cuarón.

Argentine cinema has also been prominent since the first half of the 20th century and today averages over 60 full-length titles yearly.
The industry suffered during the 1976–1983 military dictatorship but re-emerged to produce the Academy Award winner The Official Story in 1985. A wave of imported US films again damaged the industry in the early 1990s, though it soon recovered, thriving even during the Argentine economic crisis around 2001. Many Argentine movies produced during recent years have been internationally acclaimed, including Nueve reinas (2000), Son of the Bride (2001), El abrazo partido (2004), El otro (2007), the 2010 Foreign Language Academy Award winner El secreto de sus ojos, Wild Tales (2014) and Argentina, 1985 (2022).

In Brazil, the Cinema Novo movement created a particular way of making movies, with critical and intellectual screenplays, a clearer photography related to the light of the outdoors in a tropical landscape, and a political message. The modern Brazilian film industry has become more profitable inside the country, and some of its productions have received prizes and recognition in Europe and the United States, with movies such as Central do Brasil (1998), Cidade de Deus (2002) and Tropa de Elite (2007).

Puerto Rican cinema has produced some notable films, such as Una Aventura Llamada Menudo, Los Diaz de Doris and Casi Casi. An influx of Hollywood films affected the local film industry in Puerto Rico during the 1980s and 1990s, but several Puerto Rican films have been produced since, and the industry has been recovering. Cuban cinema has enjoyed much official support since the Cuban revolution, and important film-makers include Tomás Gutiérrez Alea.

Venezuelan television has also had a great impact in Latin America. It is said that whilst "Venezuelan cinema began sporadically in the 1950s[, it] only emerged as a national-cultural movement in the mid-1970s", when it gained state support and auteurs could produce work. International co-productions with Latin America and Spain continued into this era and beyond, and Venezuelan films of this time were counted among the works of New Latin American Cinema. This period is known as Venezuela's Golden Age of cinema, which enjoyed massive popularity even though it was a time of much social and political upheaval. One of the most famous Venezuelan films, even to date, is the 1976 film Soy un delincuente by Clemente de la Cerda, which won the Special Jury Prize at the 1977 Locarno International Film Festival. Soy un delincuente was one of nine films for which the state gave substantial funding, made in the year after the Venezuelan state began giving financial support to cinema in 1975. The support likely stemmed from increased oil wealth in the early 1970s and the subsequent 1973 credit incentive policy. At the time of its production the film was the most popular film in the country, and it took a decade to be displaced from this position, even though it was only one in a string of films designed to tell social realist stories of struggle in the 1950s and '60s. Equally famous is the 1977 film El Pez que Fuma (Román Chalbaud). In 1981 FONCINE (the Venezuelan Film Fund) was founded, and that year it provided even more funding, producing seventeen feature films.
A few years later, in 1983, with Viernes Negro, oil prices dropped and Venezuela entered a depression that prevented such extravagant funding, but film production continued: more transnational productions occurred, many more of them with Spain due to Latin America's poor economic fortune in general, and there was some new cinema as well. Fina Torres' 1985 Oriana won the Caméra d'Or Prize at the 1985 Cannes Film Festival as the best first feature. Film production peaked in 1984–85, with 1986 considered Venezuelan cinema's most successful year by the state, thanks to over 4 million admissions to national films, according to Venezuelanalysis. The Venezuelan capital of Caracas hosted the Ibero-American Forum on Cinematography Integration in 1989, from which the pan-continental IBERMEDIA, a union which provides regional funding, was formed.

Pre-Columbian cultures were primarily oral, although the Aztecs and Maya, for instance, produced elaborate codices. Oral accounts of mythological and religious beliefs were also sometimes recorded after the arrival of European colonizers, as was the case with the Popol Vuh. Moreover, a tradition of oral narrative survives to this day, for instance among the Quechua-speaking population of Peru and the Quiché (K'iche') of Guatemala. From the very moment of Europe's discovery of the continents, early explorers and conquistadores produced written accounts and crónicas of their experience, such as Columbus's letters or Bernal Díaz del Castillo's description of the conquest of Mexico. During the colonial period, written culture was often in the hands of the church, within which context Sor Juana Inés de la Cruz wrote memorable poetry and philosophical essays. Towards the end of the 18th century and the beginning of the 19th, a distinctive criollo literary tradition emerged, including the first novels, such as Lizardi's El Periquillo Sarniento (1816).

The 19th century was a period of "foundational fictions", in critic Doris Sommer's words: novels in the Romantic or Naturalist traditions that attempted to establish a sense of national identity, and which often focussed on the Indigenous question or the dichotomy of "civilization or barbarism" (for which see, say, Domingo Sarmiento's Facundo (1845), Juan León Mera's Cumandá (1879), or Euclides da Cunha's Os Sertões (1902)). The 19th century also witnessed the realist work of Machado de Assis, who made use of surreal devices of metaphor and playful narrative construction, much admired by critic Harold Bloom.

At the turn of the 20th century, modernismo emerged, a poetic movement whose founding text was Nicaraguan poet Rubén Darío's Azul (1888). This was the first Latin American literary movement to influence literary culture outside of the region, and was also the first truly Latin American literature, in that national differences were no longer so much at issue. José Martí, for instance, though a Cuban patriot, also lived in Mexico and the United States and wrote for journals in Argentina and elsewhere. However, what really put Latin American literature on the global map was no doubt the literary boom of the 1960s and 1970s, distinguished by daring and experimental novels (such as Julio Cortázar's Rayuela (1963)) that were frequently published in Spain and quickly translated into English.
The Boom's defining novel was Gabriel García Márquez's Cien años de soledad (1967), which led to the association of Latin American literature with magic realism, though other important writers of the period, such as the Peruvian Mario Vargas Llosa and Carlos Fuentes, do not fit so easily within this framework. Arguably, the Boom's culmination was Augusto Roa Bastos's monumental Yo, el supremo (1974). In the wake of the Boom, influential precursors such as Juan Rulfo, Alejo Carpentier, and above all Jorge Luis Borges were also rediscovered. Contemporary literature in the region is vibrant and varied, ranging from the best-selling Paulo Coelho and Isabel Allende to the more avant-garde and critically acclaimed work of writers such as Diamela Eltit, Giannina Braschi, Ricardo Piglia, or Roberto Bolaño. There has also been considerable attention paid to the genre of testimonio, texts produced in collaboration with subaltern subjects such as Rigoberta Menchú. Finally, a new breed of chroniclers is represented by the more journalistic Carlos Monsiváis and Pedro Lemebel. The region boasts six Nobel Prize winners: in addition to the two Chilean poets Gabriela Mistral (1945) and Pablo Neruda (1971), there is also the Guatemalan novelist Miguel Ángel Asturias (1967), the Colombian writer Gabriel García Márquez (1982), the Mexican poet and essayist Octavio Paz (1990), and the Peruvian novelist Mario Vargas Llosa (2010).

Latin America has produced many artists successful worldwide in terms of recorded global music sales. Among the most successful have been Juan Gabriel (Mexico), the only Latin American musician to have sold over 200 million records worldwide; Gloria Estefan (Cuba), Carlos Santana and Luis Miguel (Mexico), each of whom has sold over 90 million records; and Shakira (Colombia) and Vicente Fernández (Mexico), each with over 50 million records sold worldwide. Enrique Iglesias, although not a Latin American, has also contributed to the success of Latin music. Other notable successful mainstream acts through the years include RBD, Celia Cruz, Soda Stereo, Thalía, Ricky Martin, Maná, Marc Anthony, Ricardo Arjona, Selena, and Menudo.

Latin Caribbean music, such as merengue, bachata, salsa, and more recently reggaeton, from countries such as the Dominican Republic, Puerto Rico, Cuba, and Panama, has been strongly influenced by African rhythms and melodies. Another well-known Latin American musical genre is the Argentine and Uruguayan tango (with Carlos Gardel as its greatest exponent), as well as the distinct nuevo tango, a fusion of tango, acoustic and electronic music popularized by bandoneón virtuoso Ástor Piazzolla. Samba, North American jazz, European classical music and choro combined to form bossa nova in Brazil, popularized by guitarist João Gilberto with singer Astrud Gilberto and pianist Antonio Carlos Jobim. Other influential Latin American sounds include the Colombian cumbia and vallenato, the Chilean cueca, the Ecuadorian boleros and rockoleras, the Honduran punta, the Mexican ranchera and the mariachi, which is the epitome of Mexican soul, the Nicaraguan palo de Mayo, the Peruvian marinera and tondero, the Uruguayan candombe and the various styles of music from pre-Columbian traditions that are widespread in the Andean region.

The classical composer Heitor Villa-Lobos (1887–1959) worked on the recording of Native musical traditions within his homeland of Brazil. The traditions of his homeland heavily influenced his classical works.
Also notable are the recent work of the Cuban Leo Brouwer and the Uruguayan-American Miguel del Águila, and the guitar works of the Venezuelan Antonio Lauro and the Paraguayan Agustín Barrios. Latin America has also produced world-class classical performers such as the Chilean pianist Claudio Arrau, the Brazilian pianist Nelson Freire and the Argentine pianist and conductor Daniel Barenboim. Brazilian opera soprano Bidu Sayão, one of Brazil's most famous musicians, was a leading artist of the Metropolitan Opera in New York City from 1937 to 1952.

Arguably, the main contribution to music entered through folklore, where the true soul of the Latin American and Caribbean countries is expressed. Musicians such as Yma Súmac, Chabuca Granda, Atahualpa Yupanqui, Violeta Parra, Víctor Jara, Jorge Cafrune, Facundo Cabral, Mercedes Sosa, Jorge Negrete, Luiz Gonzaga, Caetano Veloso, Susana Baca, Chavela Vargas, Simon Diaz, Julio Jaramillo, Toto la Momposina, Gilberto Gil, Maria Bethânia, Nana Caymmi, Nara Leão, Gal Costa and Ney Matogrosso, as well as musical ensembles such as Inti Illimani and Los Kjarkas, are magnificent examples of the heights that this soul can reach.

Latin pop, including many forms of rock, is popular in Latin America today (see Spanish language rock and roll). A few examples are Café Tacuba, Soda Stereo, Maná, Los Fabulosos Cadillacs, Rita Lee, Mutantes, Secos e Molhados, Legião Urbana, Titãs, Paralamas do Sucesso, Cazuza, Barão Vermelho, Skank, Miranda!, Cansei de Ser Sexy (or CSS), and Bajo Fondo. More recently, reggaeton, which blends Jamaican reggae and dancehall with Latin American genres such as bomba and plena, as well as hip hop, is becoming more popular, in spite of the controversy surrounding its lyrics, dance steps (perreo) and music videos. It has become very popular among populations with a "migrant culture" influence – both Latino populations in the United States, such as southern Florida and New York City, and parts of Latin America where migration to the United States is common, such as Trinidad and Tobago, the Dominican Republic, Colombia, Ecuador, El Salvador, and Mexico.

The following is a list of the ten countries with the most UNESCO World Heritage Sites in Latin America. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Lebanon] | [TOKENS: 4373] |
Contents History of the Jews in Lebanon The history of the Jews in Lebanon encompasses the presence of Jews in present-day Lebanon stretching back to biblical times. While Jews have been present in Lebanon since ancient times, their numbers dwindled during the Muslim era. Through the medieval ages, Jewish people often faced persecution but retained their religious and cultural identity. In the early 20th century, for a brief period under the French Mandate of Lebanon and the 1926 Constitution of Lebanon, the Jewish community was constitutionally protected. However, after 1948, the security of Jews remained fragile, and the main synagogue in Beirut was bombed in the early 1950s. In the wake of the 1967 Arab–Israeli War, there was a mass emigration of around 6,000 Lebanese Jews from Lebanon to Israel and Western countries. The Lebanese Civil War, which started in 1975, brought immense suffering for the remaining Lebanese Jewish community; some 200 were killed in ensuing anti-Jewish pogroms, leading to a mass displacement of over 1,800 of the remaining Lebanese Jews. The final exodus of Lebanese Jews happened in August 1982, when Israeli forces invaded Lebanon and laid siege to Beirut. Over 100 Jewish families were displaced after Israeli forces bombarded the Jewish quarter, leaving it nearly abandoned, and shelled its synagogue, which suffered extensive damage. By 2005, the Jewish quarter of Beirut, Wadi Abu Jamil, held only 40 to 200 Jews.

History In Late Antiquity (the Talmudic period in Jewish historiography), Jewish sages, chazal, are recorded as having traveled from the Land of Israel to Tyre, where they taught halakha (Jewish law), answered halakhic questions, and provided biblical commentary. Rabbinic traditions reference the activities of Yaakov of Kfar Naboria, a sage from the fourth generation of the amoraim, in Tyre. Another notable amora active in Tyre was R. Mana bar Tanchum. According to the Talmud, "Hiyya bar Abba went to Tyre and discovered that R. Mana bar Tanchum permitted Turmusin (lupin beans)." Rabbi Simeon bar Yochai identified two routes that could be traveled on Shabbat without crossing the Shabbat techum: from Tiberias to Sepphoris (in Israel) and from Tyre to Sidon (in Lebanon). Evidence of Jewish migration from Tyre to Galilee is found in a Greek-language inscription on a stone lintel at a synagogue in Sepphoris, likely dating to the fifth century CE. The inscription references the archisynagogos of Tyre and Sidon, implying that Jews from both Tyre and Sidon had settled in Sepphoris, establishing a community centered around the local synagogue, complete with its own leaders.

As governor of Syria under Caliph 'Uthman, from 639 to 661 CE, Mu'awiya settled Jews in Tripoli. During the early Islamic period, Tyre was home to a substantial Jewish population (estimated at around 4,000 prior to the Arab conquest) and benefited from Mu'awiya's redevelopment of coastal cities. In the 11th century, Tyre's economic and Jewish communal significance grew, particularly with the temporary relocation of the Land of Israel yeshiva to the city between roughly 1077 and 1093. Regional instability under the Fatimid Caliphate, including a failed rebellion in 1063 and the sack of Tyre in 1093, led to a period of destruction in the city. The Damascus Jewish community ransomed captives from Tyre in the aftermath. Tyre remained a Jewish center during the Crusader period, with Benjamin of Tudela reporting 400–500 Jewish households, a number of synagogues, and a scholarly community.
The Jewish presence largely ended with the Mamluk conquest in 1291, after which Tyre declined and became a secondary stop for pilgrims.

During the Ottoman period, Deir al-Qamar hosted a Jewish community, first documented during Rabbi Joseph Soffer's visit circa 1759. The community is cited in Responsa literature from the same period, indicating it had communal institutions such as a beth din and a synagogue. Rabbi Joseph Schwartz provides additional context, estimating the community at 80 householders, mostly engaged in trade, who also leased lands for iron production and owned vineyards and olive plantations. In the 1837 Safed earthquake, the Jews of Deir al-Qamar aided the Jews of Safed. In 1847, they faced a blood libel incident. The community's demise came during the 1860 civil conflict in Mount Lebanon, which led to their displacement; following these hardships, the community dispersed in the late 19th century, selling their properties, with the eventual sale of their synagogue in 1893.

In a book published in 1847, the Scottish missionary John Wilson describes several Jewish communities he encountered during his travels in Lebanon. One of these was the community of Hasbaya, which he reports to have been of Sephardic background and to have numbered around 100 people. According to the community's own account, all of its members were locally born, except for one individual who had been born in Acre; their ancestors, who had predominantly arrived from Austria, had settled in the Wadi al-Taym area approximately a century earlier. Wilson notes that most members of the Hasbaya community were traders, with only a few owning businesses, and that they were in need of Hebrew books. Their leader, Abraham ben David, also served as the butcher, teacher, and prayer leader. Wilson also documents the Jewish community of Sidon, which he estimates to have comprised between 350 and 400 people, likewise all of Sephardic origin. The community maintained one school with three teachers and forty pupils, who read the Scriptures but did not study the Talmud. The community reportedly possessed twenty-five Torah books, one of which was said to be five hundred years old. Its leader was Issachar Abulafia.

In 1911, Jews from Italy, Greece, Syria, Iraq, Turkey, Egypt and Iran moved to Beirut, expanding the community there by more than 5,000 additional members. Articles 9 and 10 of the 1926 Constitution of Lebanon guaranteed the freedom of religion and provided each religious community, including the Jewish community, the right to manage its own civil matters, including education; the Jewish community was thus constitutionally protected, a fact that did not apply to other Jewish communities in the region. The Jewish community prospered under the French mandate and Greater Lebanon, exerting considerable influence throughout Lebanon and beyond. They allied themselves with Pierre Gemayel's Phalangist Party (a right-wing Maronite group modelled after similar movements in Italy and Germany, and Franco's Falangist movement in Spain) and played an instrumental role in the establishment of Lebanon as an independent state.[citation needed] During the Greater Lebanon period, two Jewish newspapers were founded: the Arabic-language Al-Alam al-Israili (The Israelite World) and the French-language Le Commerce du Levant, an economic periodical which remained in circulation until June 2021.[citation needed]

The Jewish community of Beirut evolved in three distinct phases.
Until 1908, the Jewish population in Beirut grew by migration from the Syrian interior and from other Ottoman cities like İzmir, Salonica, Istanbul, and Baghdad. Commercial growth in the thriving port city, consular protection, and relative safety and stability in Beirut all accounted for the Jewish migration. Thus, from a few hundred at the beginning of the 19th century, the Jewish community grew to 2,500 by the end of the century, and to 3,500 by World War I. While the number of Jews grew considerably, the community remained largely unorganized. During this period, the community lacked some fundamental institutions, such as communal statutes, an elected council, and welfare and taxation mechanisms. In this period, the most organized and well-known Jewish institution in the city was probably the private Tiferet Israel (The Glory of Israel) boarding school founded by Zaki Cohen in 1874. The school attracted Jewish students from prosperous families like Shloush (Jaffa), Moyal (Jaffa), and Sassoon (Baghdad). Its founder, influenced by the Ottoman reforms and by local cultural trends, aspired to create a modern yet Jewish school. It offered both secular and strictly Jewish subjects, instruction in seven languages, and commercial subjects. The school was closed at the beginning of the 20th century due to financial hardships.[citation needed]

The Young Turk Revolution (1908) sparked the organization process. Within six years, the Beirut community created a general assembly and an elected twelve-member council, drafted communal statutes, appointed a chief rabbi, and formed committees to administer taxation and education. The process involved tension and even conflicts within the community, but eventually the community council established its rule and authority in the community. The chief rabbi received his salary from the community and was de facto under the council's authority.[citation needed]

With the establishment of Greater Lebanon (1920), the Jewish community of Beirut became part of a new political entity. The French mandate rulers adopted local political traditions of power-sharing and recognized the autonomy of the various religious communities. Thus, the Jewish community was one of Lebanon's sixteen communities and enjoyed a large measure of autonomy, more or less along the lines of the Ottoman millet system. During the third phase of its development, the community founded two major institutions: the Maghen Abraham Synagogue (1926) and the renewed Talmud-Torah Selim Tarrab community school (1927). The community also maintained welfare services like the Biqur-Holim, Ozer-Dalim, and Mattan-Basseter societies. The funding for all these institutions came from contributions by able community members, who gave on Jewish holidays and celebrations, through subscriptions from prominent members, and through fund-raising events and lotteries the community organized. In fact, the community was financially independent and did not rely on European Jewish philanthropy.[citation needed]

The development of the Jewish yishuv in Palestine influenced the Jewish leadership, who usually showed sympathy and active support for Zionism. The Jewish leadership in Beirut during this time aligned itself ideologically with the American-based B'nai B'rith organization through its local proxy, the Arzei Ha-Levanon Lodge, which was staffed by local community leaders. The B'nai B'rith lodge in Beirut attracted the social and economic elite.
It embarked on community progress and revival through social activism, Jewish solidarity, and philanthropic values. Unlike the Alliance, which mainly aspired to empower the Jewish individual through modern education, B'nai B'rith strove to empower both the individual and the community as a whole. In Beirut, unlike other Jewish communities, most of the community council members were also B'nai B'rith members, hence there existed an overlap between the council and the lodge. The Alliance school, of course, was popular in the community, as it focused on French and prepared students for higher education. Since there was no Jewish high school in Beirut, many Jewish students attended foreign (Christian) schools, either secular or religious. The Jewish community was one of the smaller communities in the country and hence was not entitled to guaranteed representation in the parliament. Being excluded from Lebanese political life, the Jewish leadership aspired to improve the community's public standing by consolidating and improving the community as a whole. Overall, the French mandate period was characterized by growth, development, and stability.[citation needed]

In the 20th century, the Jewish community in Lebanon showed little involvement or interest in politics. They were generally traditional rather than religious and were not involved in the feuds of the larger religious groups in the country. Broadly speaking, they tended to support Lebanese nationalism and felt an affinity toward France. French authorities at the time discouraged expressions of Zionism (which they saw as a tool of their British rival), and the community was mostly apathetic to it. A few community leaders, such as Joseph Farhi, fervently supported the Zionist cause, and there was a level of support for the concept of a Jewish state in Palestine. The Jews in Lebanon had good contacts with those in Palestine, and there were regular visits between Beirut and Jerusalem. Accounts by the Alliance Israélite Universelle, which established schools that most Jewish children in the country attended, spoke of active Zionism, while the Jewish Agency lamented the lack of national sentiment. The World Zionist Organization was also disappointed with the lack of more active support, and the community did not send a delegation to the World Zionist Congress.[citation needed]

A young Lebanese Jew named Joseph Azar, who took it upon himself to advance the Zionist cause with other individuals in October 1930, said in a report for the Jewish Agency: "Before the disturbance of August 1929 the Jews...of Lebanon manifested much sympathy for the Zionist cause and worked actively for the sake of Palestine. They had established associations which collected money for (sic) Keren Kayemeth and (sic) Keren Heyesod." He said that after 1929, the Jews "started to fear from (sic) anything having any connection with Zionism and ceased to hold meetings and collect money." He also said that the Jewish Communal Council in Beirut "endeavored to prevent anything having a Jewish national aspect because they feared that this might wound the feelings of the Muslims." Other sources suggested that such charity work was motivated not so much by Zionism as by an interest in helping Jews in need.[citation needed]

The Maccabi organization was officially recognized by the Lebanese authorities and was an active center for Jewish cultural affairs in Beirut and Saida.
The Maccabi taught Hebrew language and Jewish history, and was the focal point of the small Zionist movement in the country. There was also a pro-Zionist element within the Maronite community in Lebanon.[citation needed] After the 1929 riots in Jerusalem, the Grand Mufti of Jerusalem was expelled from Palestine and chose to settle in Lebanon, where he continued to mobilize resistance against Zionist claims to Palestine. During the riots, some Muslim nationalists and the editors of a major Greek-Orthodox newspaper (both of whom saw the fate of the emerging Lebanese state as lying within a broader Arab context) sought to incite the disturbances in Lebanon, where until that point most ethno-religious groups had remained aloof from the coming conflict in Palestine. It also seemed to have an effect on the cryptic response given by Interior Minister Habib Abou Chahla to Joseph Farhi when, on behalf of the Jewish community, he requested a seat in the newly expanded Lebanese Parliament.[citation needed] Outside of Beirut, attitudes toward Jews were usually more hostile. In November 1945, fourteen Jews were killed in anti-Jewish riots in Tripoli. Further anti-Jewish events occurred in 1948 following the 1948 Arab–Israeli War. The ongoing insecurity, combined with the greater opportunities Beirut offered, prompted most of the remaining Jews of Tripoli to relocate to Beirut. The Jewish community was traditionally located in Wadi Abu Jamil and Ras Beirut, with other communities in Chouf, Deir al-Qamar, Aley, Bhamdoun, and Hasbaya. Lebanon was the only Arab country whose Jewish population increased after the declaration of the State of Israel in 1948, reaching around 10,000 people. However, after the Lebanon Crisis of 1958, many Lebanese Jews left the country, especially for Israel, France, the United States, Canada, and Latin America (mostly Brazil). The main synagogue in Beirut was bombed in the early 1950s,[citation needed] and the Lebanese Chamber of Deputies witnessed heated debates on the status of Lebanese Jewish army officers. The discussions culminated in a unanimous resolution to expel and exclude them from the Lebanese Army. The two Jewish army officers were discharged, but a few Jews continued to work for the government. The Jewish population of Beirut, which stood at 9,000 in 1948, dwindled to 2,500 by 1969. The Lebanese Civil War, which started in 1975, was much worse for the Lebanese Jewish community, and some 200 were killed in pogroms. Most of the 1,800 remaining Lebanese Jews emigrated in 1976, fearing that the growing Syrian presence in Lebanon would restrict their freedom to emigrate. Beginning in 1975 and 1976, Jews began to leave their original neighborhood of Wadi Abu Jamil for Christian areas. In the 1970s and 1980s, Jews lived in relative harmony with their surroundings, though the last rabbi remaining in Lebanon left the country in 1977. During the 1982 Israeli invasion of Lebanon, 11 leaders of the Jewish community were captured and killed by Islamic extremists, and community buildings were also damaged. During the Israeli Army's advance toward Beirut, Yasser Arafat assigned Palestinian gunmen to stand guard at the Maghen Abraham Synagogue, an important symbol of the community located near Parliament. The synagogue was bombarded by the Israeli Air Force, perhaps on the presumption that it was being used as a weapons depot by Palestinians. 
During the Israeli invasion, some of the Lebanese Jews who had emigrated to Israel returned as invading troops. Jews were also targeted in the later years of the Lebanese civil war. Isaac Sasson, a leader of the Lebanese Jewish community, was kidnapped at gunpoint on 31 March 1985, on his way from Beirut International Airport after a trip to Abu Dhabi. Earlier, kidnappers had also seized Eli Hallak, a 60-year-old physician; Haim Cohen, a 39-year-old Jew; Isaac Tarrab; Yeheda Benesti; Salim Jammous; and Elie Srour. Cohen, Tarrab, and Srour were killed by their captors, a Shiite Muslim group called The Organization of the Oppressed on Earth, which is believed to have been part of, or had links to, Hezbollah. The others' fates remain unknown, but they are believed to have also been killed.[citation needed] The 1982 war with Israel further reduced the number of Jews in the country. Much of the emigration was to countries with well-established Lebanese or Lebanese Jewish diaspora communities, such as Brazil, France, Switzerland, Canada, and the United States. By 2005, after the assassination of Prime Minister Rafik Hariri, almost all Lebanese Jews had fled the country; the Jewish quarter of Beirut, Wadi Abu Jamil, was virtually abandoned, and only around 40 Jews, mostly elderly, remained in Beirut. In 2006, there were only about 40 Jews left in Lebanon. In 2010, work began to restore the Maghen Abraham Synagogue, an old synagogue in Beirut that had fallen into disrepair several years earlier. Solidere agreed to provide funds for the renovation because political officials believed it would portray Lebanon as an open society tolerant of Judaism. None of the Jews involved in the project agreed to be identified. The international media, and even some members of the Jewish community in and out of Lebanon, questioned who would pray at the synagogue. The decision received acclaim from both Sunni and Shi'ite Muslims, and Hezbollah leader Hassan Nasrallah welcomed the decision to restore the synagogue. The self-declared head of the Jewish Community Council, Isaac Arazi, who left Lebanon in 1983, eventually came forward but refused to show his face on camera in a television interview, fearing that his business would suffer if clients knew they had been dealing with a Jew. Arazi died in 2023. Jews in Lebanon live mostly in or around Beirut. The community has been described as elderly and apprehensive, and there are no services at Beirut's synagogues. In 2015, the estimated total Jewish population in Syria and Lebanon combined was 100. In 2020, there were only about 29 Jews in Lebanon. In 2022, there were 4,500 Jews registered on election rolls, but the majority had died or left the country; only 27 people were registered as "Israelites", the designation for Jews in official registers. Out of fear and weariness, most signs of Jewish life in Lebanon had fallen into disuse by 2022, with synagogues deserted, Magen Davids removed, and Jewish cemeteries abandoned. Jews largely hide their identity. In Saida, where Jews have had a presence since the first century BC, there is no indication of a Jewish presence today. The country still hosts a few buildings that were originally synagogues, such as one located in Tripoli and another in the southern city of Sidon. 
Lebanese Jewish-born notables Jewish community presidents Jewish community vice presidents Chief rabbis Between 1799 and 1978, a series of chief rabbis led the Lebanese Jewish community. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Malaysia] | [TOKENS: 1183] |
Contents History of the Jews in Malaysia The history of the Jews in Malaysia dates back to the 1700s. Jews have lived in Malaysia both as immigrants and as people born in the country. The state of Penang was once home to a Jewish community, until the late 1970s, by which time most had emigrated. Growing racial and religious hostility in Malaysia as a result of the Israeli–Palestinian conflict has caused many Malaysian Jews to leave the country. The Malaysian Jewish community consists of Sephardic Jews who live discreetly amongst the Kristang people (Malacca-Portuguese), Mizrahi Jews (the majority of whom are Baghdadi Jews), Cochin Jews, and Ashkenazi Jews. History Jews could be found well into the 18th century in the cosmopolitan bazaars of Malacca. Malacca was the first and largest Jewish settlement in Malaysian Jewish history. Due to persecution by the Portuguese Inquisition in the region, many of the Jews assimilated into the Kristang community during the period. The arrival of Baghdadi Jews in Penang probably occurred at the turn of the 19th century, as the fledgling British-ruled entrepôt grew and attracted Jewish trading families such as the Sassoons and Meyers from India. There was also significant emigration of Jews from the Ottoman province of Baghdad as a result of the persecutions of the governor, Dawud Pasha, whose rule lasted from 1817 to 1831. The first Baghdadi Jew known by name to have settled in Penang was Ezekiel Aaron Manasseh, who emigrated from Baghdad in 1895. Manasseh claimed to have been the only practising Jew in Malaya for 30 years, until after World War I, when a significant number of Baghdadi Jews began to settle in Malaya; statistics from the same period, however, showed a somewhat different picture. During the Japanese invasion of Malaya, the Penang Jewish community was evacuated to Singapore, and many were interned by the Japanese during the subsequent occupation of both Malaya and Singapore. After the war, a majority emigrated to Singapore, Australia, Israel, and the United States. By 1963, only 20 Penang Jewish families remained in the country. Penang's only synagogue, at 28 Nagore Road, was opened in 1929 but closed down in 1976, as the community could no longer fulfill the minyan, a quorum of ten or more adult Jews required for public religious obligations. In 2008, it was reported that approximately 100 Jewish refugees from Russia were residing in Malaysia. The original Penang Jewish community ceased to exist with the death of Mordecai (Mordy) David Mordecai on 15 July 2011. The rest of the Penang Jews have either embraced Christianity or emigrated to other countries, especially with the rise of antisemitic sentiment tied to the anti-Israel policies the Malaysian government has pursued since the 1970s in reaction to the conflict in Palestine. Yahudi Road (or Jew Road) in Penang, where the majority of the Penang Jewish population lived, has since been renamed Jalan Zainal Abidin, erasing another legacy of the Jewish presence in Malaysia. The only significant presence remaining is the Jewish cemetery and the old synagogue, previously occupied by a photo studio whose owner, aware of the building's historical significance, had undertaken to preserve the exterior. Besides that, there is also a minority of former Kristang Christians who have rediscovered their Sephardic Jewish roots and returned to Judaism. 
This led to the establishment of the Kristang Community for Cultural Judaism (KCCJ) in 2010, which is no longer in operation for political reasons. In 2021, the book The Last Jews of Penang by Zayn Gregory, with illustrations by Arif Rafhan, was released by Matahari Books, chronicling snapshots of Jewish life in George Town. Penang Jewish Cemetery The Penang Jewish Cemetery, established in 1805, is believed to be the oldest single Jewish cemetery in the country. It forms a 38,087-square-foot (3,538.4 m2) cleaver-shaped plot of land situated alongside Jalan Zainal Abidin (formerly Yahudi Road), a small link road between Burmah and Macalister Roads in George Town. The cemetery used to be a green lung, but much of the lawn has been cemented over. The oldest Jewish tombstone, dated 9 July 1835, is dedicated to a Mrs. Shoshan Levi and is believed to mark the grave of the English Jewish benefactress who donated the land where the current cemetery stands. Most of the graves take the form of a triangular vaulted-lid casket, resembling ossuaries commonly found in Israel. There are approximately 107 graves in the cemetery, with the most recent tombstone dated 2011, incidentally the grave of the last ethnic Jew on the island. It is the only cemetery established solely for the once small but thriving Jewish community in Peninsular Malaysia, although there may be a few Jewish graves in other, non-Jewish cemeteries. The graves of the Cohens are located separately from the main group, in the north-eastern corner of the cemetery; they include the grave of Eliaho Hayeem Victor Cohen, a lieutenant with the 9th Jat Regiment of the British Indian Army killed in an accident on 10 October 1941. It is the only grave in the cemetery maintained by the Commonwealth War Graves Commission. The cemetery is still officially open for burials and is managed by a board of trustees established in 1885. Notable Malaysian Jews Gallery See also References Literature External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Kivy_(framework)#Kv_language] | [TOKENS: 318] |
Contents Kivy (framework) Kivy is a free and open-source Python framework for developing mobile apps and other multitouch application software with a natural user interface (NUI). It is distributed under the terms of the MIT License, and can run on Android, iOS, Linux, macOS, and Windows. Kivy is the main framework developed by the Kivy organization, alongside Python for Android, Kivy for iOS, and several other libraries meant to be used on all platforms. In 2012, Kivy received a $5,000 grant from the Python Software Foundation for porting it to Python 3.3. Kivy also supports the Raspberry Pi, support for which was funded through Bountysource. The framework contains all the elements needed for building an application. Kivy is an evolution of the PyMT project. Code example A typical first example is a Hello world program with just one button (a minimal sketch is given below). Kv language The Kv language is a language dedicated to describing user interfaces and interactions in the Kivy framework. As with other user interface markup languages, it is possible to easily create a whole UI and attach interactions. For example, to create a loading dialog that includes a file browser and Cancel/Load buttons, one could first create the base widget in Python and then construct the UI in Kv; a sketch of such a main.py and its associated Kv file is also given below. Alternatively, the layout (here, a BoxLayout) and the buttons can be loaded directly in the main.py file. Related projects Google Summer of Code Kivy participated in Google Summer of Code under the Python Software Foundation. See also References External links |
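The original code listings did not survive extraction, so the following are minimal sketches based on the standard Kivy API rather than the article's exact examples. First, a "Hello world" app with a single button; the class name HelloApp is illustrative:

    # hello.py -- a minimal Kivy "Hello world" with one button.
    from kivy.app import App
    from kivy.uix.button import Button

    class HelloApp(App):
        def build(self):
            # The widget returned by build() becomes the root of the UI.
            return Button(text='Hello, world')

    if __name__ == '__main__':
        HelloApp().run()

And a sketch of the loading-dialog example described above, with the base widget defined in Python and the UI built in Kv; the names LoadDialog, load, and cancel are assumptions chosen for illustration. In main.py:

    # main.py -- base widget for a loading dialog; the callbacks are
    # injected as Kivy properties and wired to the buttons in the Kv file.
    from kivy.uix.floatlayout import FloatLayout
    from kivy.properties import ObjectProperty

    class LoadDialog(FloatLayout):
        load = ObjectProperty(None)    # called with (path, selection)
        cancel = ObjectProperty(None)  # called with no arguments

And in the associated Kv file:

    <LoadDialog>:
        BoxLayout:
            size: root.size
            pos: root.pos
            orientation: "vertical"
            FileChooserListView:
                id: filechooser
            BoxLayout:
                size_hint_y: None
                height: 30
                Button:
                    text: "Cancel"
                    on_release: root.cancel()
                Button:
                    text: "Load"
                    on_release: root.load(filechooser.path, filechooser.selection)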
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Life_on_Venus] | [TOKENS: 3974] |
Contents Life on Venus The possibility of life on Venus is a subject of interest in astrobiology due to Venus's proximity and similarities to Earth. To date, no definitive evidence has been found of past or present life there. In the early 1960s, studies conducted via spacecraft demonstrated that the current Venusian environment is extreme compared to Earth's. Studies continue to question whether life could have existed on the planet's surface before a runaway greenhouse effect took hold, and whether a relict biosphere could persist high in the modern Venusian atmosphere. With extreme surface temperatures reaching nearly 735 K (462 °C; 863 °F) and an atmospheric pressure 92 times that of Earth, the conditions on Venus make water-based life as we know it unlikely on the surface of the planet. However, a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the temperate, acidic upper layers of the Venusian atmosphere. In September 2020, research was published reporting the presence of phosphine in the planet's atmosphere, a potential biosignature. However, doubts have been cast on these observations. As of 8 February 2021, an updated status of studies on the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) had been reported, though whether these gases are even present remains unclear. On 2 June 2021, NASA announced two new related missions to Venus: DAVINCI and VERITAS. Surface conditions Because Venus is completely covered in clouds, human knowledge of surface conditions was largely speculative until the space probe era. Until the mid-20th century, the surface environment of Venus was believed to be similar to Earth's; hence it was widely believed that Venus could harbor life. In 1870, the British astronomer Richard A. Proctor said the existence of life on Venus was impossible near its equator, but possible near its poles. Microwave observations published by C. Mayer et al. in 1958 indicated a high-temperature source (600 K). Strangely, millimetre-band observations made by A. D. Kuzmin indicated much lower temperatures. Two competing theories explained the unusual radio spectrum, one suggesting the high temperatures originated in the ionosphere, the other a hot planetary surface. In 1962, Mariner 2, the first successful mission to Venus, measured the planet's temperature for the first time and found it to be "about 500 degrees Celsius (900 degrees Fahrenheit)." Since then, increasingly clear evidence from various space probes has shown that Venus has an extreme climate, with a greenhouse effect generating a constant temperature of about 500 °C (932 °F) on the surface. The atmosphere contains sulfuric acid clouds. In 1968, NASA reported that air pressure on the Venusian surface was 75 to 100 times that of Earth. This was later revised to 92 bars, almost 100 times that of Earth and similar to the pressure found more than 1,000 m (3,300 ft) deep in Earth's oceans. In such an environment, and given the hostile characteristics of the Venusian weather, life as we know it is highly unlikely to occur. Past habitability potential Scientists have speculated that if liquid water existed on its surface before the runaway greenhouse effect heated the planet, microbial life may have formed on Venus, but it may no longer exist. 
Assuming the process that delivered water to Earth was common to all the planets near the habitable zone, it has been estimated that liquid water could have existed on Venus's surface for up to 600 million years during and shortly after the Late Heavy Bombardment, which could be enough time for simple life to form, though this figure varies from as little as a few million years to as much as a few billion. A study published in September 2019 concluded that Venus may have had surface water and habitable conditions for around 3 billion years, and may have remained in this condition until 700 to 750 million years ago. If correct, this would have been ample time for the formation of life, and for microbial life to evolve to become aerial. Since then, there have been more studies and climate models, with different conclusions. There has been very little analysis of Venusian surface material, so it is possible that evidence of past life, if it ever existed, could be found with a probe capable of enduring Venus's current extreme surface conditions. However, the resurfacing of the planet in the past 500 million years means that it is unlikely that ancient surface rocks remain, especially those containing the mineral tremolite which, theoretically, could have encased some biosignatures. Studies reported on 26 October 2023 suggest, for the first time, that Venus may have had plate tectonics in ancient times and, as a result, a more habitable environment, possibly one capable of supporting life forms. It has been speculated that life on Venus may have come to Earth through lithopanspermia, via the ejection of icy bolides that facilitated the preservation of multicellular life on long interplanetary voyages. "Current models indicate that Venus may have been habitable. Complex life may have evolved on the highly irradiated Venus, and transferred to Earth on asteroids. This model fits the pattern of pulses of highly developed life appearing, diversifying and going extinct with astonishing rapidity through the Cambrian and Ordovician periods, and also explains the extraordinary genetic variety which appeared over this period." This theory, however, is a fringe view and is considered unlikely. Between 700 and 750 million years ago, a near-global resurfacing event triggered the release of carbon dioxide from rock on the planet, which transformed its climate. In addition, according to a study from researchers at the University of California, Riverside, Venus would be able to support life if Jupiter had not altered its orbit around the Sun. Present habitability of its atmosphere Although there is little possibility of existing life near the surface of Venus, the altitudes about 50 km (31 mi) above the surface have a mild temperature, and hence there are still some opinions in favor of such a possibility in the atmosphere of Venus. The idea was first brought forward by German physicist Heinz Haber in 1950. In September 1967, Carl Sagan and Harold Morowitz published an analysis of the issue of life on Venus in the journal Nature. In the analysis of mission data from the Venera, Pioneer Venus and Magellan missions, it was discovered that carbonyl sulfide, hydrogen sulfide and sulfur dioxide were present together in the upper atmosphere. Venera also detected large amounts of toxic chlorine just below the Venusian cloud cover. Carbonyl sulfide is difficult to produce inorganically, but it can be produced by volcanism. 
Sulfuric acid is produced in the upper atmosphere by the Sun's photochemical action on carbon dioxide, sulfur dioxide, and water vapor. A 2020 re-analysis of Pioneer Venus data found that part of the chlorine spectral features, and all of the hydrogen sulfide spectral features, are instead phosphine-related, implying a lower-than-thought concentration of chlorine and a non-detection of hydrogen sulfide. Solar radiation constrains the atmospheric habitable zone to between 51 km (65 °C) and 62 km (−20 °C) altitude, within the acidic clouds. It has been speculated that clouds in the atmosphere of Venus could contain chemicals that can initiate forms of biological activity, and that they have zones where photophysical and chemical conditions allow for Earth-like phototrophy. It has been speculated that hypothetical microorganisms inhabiting the atmosphere could employ ultraviolet light (UV) emitted by the Sun as an energy source, which could explain the dark lines (called the "unknown UV absorber") observed in UV photographs of Venus. The existence of this "unknown UV absorber" prompted Carl Sagan to publish an article in 1963 proposing the hypothesis of microorganisms in the upper atmosphere as the agent absorbing the UV light. In August 2019, astronomers reported a newly discovered long-term pattern of UV light absorbance and albedo changes in the atmosphere of Venus and its weather, caused by "unknown absorbers" that may include unknown chemicals or even large colonies of microorganisms high up in the atmosphere. In January 2020, astronomers reported evidence suggesting that Venus is currently (within 2.5 million years of the present) volcanically active, and that the residue from such activity may be a potential source of nutrients for possible microorganisms in the Venusian atmosphere. In 2021, it was suggested that the color of the "unknown UV absorber" matches that of "red oil", a known substance comprising a mix of organic carbon compounds dissolved in concentrated sulfuric acid. Research published in September 2020 indicated the detection of phosphine (PH3) in Venus's atmosphere by the Atacama Large Millimeter Array (ALMA) telescope that was not linked to any known abiotic method of production present or possible under Venusian conditions. However, the claimed detection of phosphine was disputed by several subsequent studies. A molecule like phosphine is not expected to persist in the Venusian atmosphere since, under ultraviolet radiation, it will eventually react with water and carbon dioxide. PH3 is associated with anaerobic ecosystems on Earth, and may indicate life on anoxic planets. Related studies suggested that the initially claimed concentration of phosphine (20 ppb) in the clouds of Venus indicated a "plausible amount of life," and further, that the typical predicted biomass densities were "several orders of magnitude lower than the average biomass density of Earth's aerial biosphere." As of 2019, no known abiotic process generates phosphine gas on terrestrial planets (as opposed to gas giants) in appreciable quantities. Phosphine can be generated by the geological weathering of olivine lavas containing inorganic phosphides, but this process requires ongoing and massive volcanic activity. Therefore, detectable amounts of phosphine could indicate life. In July 2021, a volcanic origin was proposed for the phosphine, by extrusion from the mantle. 
In a statement published on October 5, 2020, on the website of the International Astronomical Union's commission F3 on astrobiology, the authors of the September 2020 paper about phosphine were accused of unethical behaviour and criticized for being unscientific and misleading the public. Members of that commission have since distanced themselves from the IAU statement, claiming that it had been published without their knowledge or approval. The statement was removed from the IAU website shortly thereafter. The IAU's media contact Lars Lindberg Christensen stated that the IAU did not agree with the content of the letter, and that it had been published by a group within the F3 commission, not the IAU itself. By late October 2020, a review of the data processing of both the ALMA data used in the original September 2020 publication and later James Clerk Maxwell Telescope (JCMT) data had revealed background calibration errors resulting in multiple spurious lines, including the spectral feature attributed to phosphine. Re-analysis of the data with proper background subtraction either does not result in the detection of phosphine, or detects it at a concentration of 1 ppb, 20 times below the original estimate. On 16 November 2020, ALMA staff released a corrected version of the data used by the scientists of the original study published on 14 September. On the same day, the authors of this study published a re-analysis as a preprint using the new data, concluding that the planet-averaged PH3 abundance is about 7 times lower than what they had detected with the previous ALMA processing, that it likely varies by location, and that it is reconcilable with the JCMT detection of roughly 20 times this abundance if the abundance varies substantially in time. They also responded to points raised in a critical study by Villanueva et al. that challenged their conclusions, finding that, so far, the presence of no other compound can explain the data. The authors reported that more advanced processing of the JCMT data was ongoing. Re-analysis of the in situ data gathered by the Pioneer Venus Multiprobe in 1978 has also revealed the presence of phosphine and its dissociation products in the atmosphere of Venus. In 2021, a further analysis detected trace amounts of ethane, hydrogen sulfide, nitrite, nitrate, hydrogen cyanide, and possibly ammonia. The phosphine signal was also detected in data collected using the JCMT, though much weaker than that found using ALMA. In October 2020, a re-analysis of an archived infrared spectrum measurement from 2015 did not reveal any phosphine in the Venusian atmosphere, placing an upper limit on the phosphine volume concentration of 5 parts per billion (a quarter of the value measured in the radio band in 2020). However, the wavelength used in these observations (10 microns) would only have detected phosphine at the very top of the clouds of the atmosphere of Venus. BepiColombo, launched in 2018 to study Mercury, flew by Venus on October 15, 2020, and on August 10, 2021. Johannes Benkhoff, project scientist, believed BepiColombo's MERTIS (Mercury Radiometer and Thermal Infrared Spectrometer) could possibly detect phosphine, but "we do not know if our instrument is sensitive enough". In 2022, observations of Venus using the SOFIA airborne infrared telescope failed to detect phosphine, with an upper limit on the concentration of 0.8 ppb announced for Venusian altitudes of 75–110 km. A subsequent re-analysis of the SOFIA data using nonstandard calibration techniques resulted in a phosphine detection at a concentration level of about 1 ppb. 
If present, phosphine appears to be more abundant in the pre-morning parts of the Venusian atmosphere. In 2024, the existence of phosphine on Venus was further corroborated with new observations by the JCMT-Venus programme. ALMA restarted on 17 March 2021 after a year-long shutdown in response to the COVID-19 pandemic, which may enable further observations that could provide insights for the ongoing investigation. Despite the controversies, NASA is in the beginning stages of sending a future mission to Venus. The Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy mission (VERITAS) would carry radar to view through the clouds and obtain new images of the surface, of much higher quality than those last photographed thirty-one years ago. The other, the Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging Plus (DAVINCI+), would descend through the atmosphere, sampling the air on the way down in the hope of detecting the phosphine. In June 2021, NASA announced that DAVINCI+ and VERITAS had been selected from four mission concepts picked in February 2020 as part of NASA's Discovery 2019 competition, for launch in the 2028–2030 time frame. There is also an ongoing long-term monitoring campaign with the JCMT to study phosphine and other molecules in Venus's atmosphere. According to new research announced in January 2021, the spectral line at 266.94 GHz attributed to phosphine in the clouds of Venus was more likely to have been produced by sulfur dioxide in the mesosphere. That claim was refuted in April 2021 as inconsistent with the available data, and the detection of PH3 in the Venusian atmosphere with ALMA was recovered at ~7 ppb. By August 2021, it was found that the suspected sulfur dioxide contamination contributed only 10% of the tentative signal in the phosphine spectral line band in ALMA spectra taken in 2019, and about 50% in ALMA spectra taken in 2017. Speculative biochemistry of Venusian life Conventional water-based biochemistry has been claimed to be impossible in Venusian conditions. In June 2021, calculations of water activity levels in Venusian clouds based on data from space probes showed these to be two orders of magnitude too low, at the examined places, for any known extremophile bacteria to survive. Alternative calculations based on an estimation of the energy costs of obtaining hydrogen in Venus conditions compared to Earth conditions indicate only a minor (6.5%) additional energy expenditure during Venusian photosynthesis of glucose. In August 2021, it was suggested that even saturated hydrocarbons are unstable in the ultra-acid conditions of Venusian clouds, making the cellular membranes of proposed Venusian life problematic. Instead, it was proposed that Venusian "life" may be based on self-replicating molecular components of "red oil", a known class of substances consisting of a mixture of polycyclic carbon compounds dissolved in concentrated sulfuric acid. Conversely, in September 2024, it was reported that while short-chain fatty acids are unstable in concentrated sulfuric acid, it is possible to construct acid-stable analogs capable of bilayer membrane formation by replacing their carboxylic groups with sulfate, amine or phosphate groups. Also, 19 of the 20 protein-making amino acids (the exception being tryptophan) and all nucleic acids are stable under Venusian cloud conditions. 
In December 2021, it was suggested that Venusian life, as the chemically most plausible cause, may photochemically produce ammonia from available chemicals, resulting in life-bearing droplets becoming a slurry of ammonium sulfite with a less acidic pH of 1. These droplets would deplete sulfur dioxide in the upper cloud layers as they settle down, explaining the observed distribution of sulfur dioxide in the atmosphere of Venus, and may make the clouds no more acidic than some extreme terrestrial environments that harbor life. Speculative life cycles of Venusian life A 2020 hypothesis paper suggested that microbial life on Venus may have a two-stage life cycle. The metabolically active part of such a cycle would have to happen within cloud droplets to avoid a fatal loss of liquid. After such droplets grow large enough to sink under the force of gravity, organisms would fall with them into hotter lower layers and desiccate, becoming small and light enough to be raised again to the habitable layer by gravity waves on a timescale of approximately a year. A 2021 hypothesis paper criticized this concept, pointing out that the stagnancy of Venus's lower haze layers makes a return from the haze layer to the relatively habitable clouds problematic even for small particles. Instead, an in-cloud evolution model was proposed, in which organisms evolve to become maximally absorptive (dark) for a given amount of biomass, and the darker, solar-heated areas of cloud are kept afloat by thermal updrafts initiated by the organisms themselves. Alternatively, microorganisms could be kept aloft by a negative photophoresis effect. Habitability of Venus for humans While the surface of Venus is very inhospitable to humans, conditions at altitudes of 50 km above the surface have been identified as hospitable not only for indigenous life but also for human life, more so than anywhere else in the Solar System other than Earth. Conditions there, from atmospheric pressure, gravity, and temperature to radiation, though not atmospheric chemistry, are very much like conditions on Earth at surface level. Because of this prospect, floating habitats at such altitudes have been suggested for human missions to Venus. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Color] | [TOKENS: 6750] |
Contents Color Color (or colour in Commonwealth English; see spelling differences) is the visual perception based on the electromagnetic spectrum. Though color is not an inherent property of matter, color perception is related to an object's light absorption, emission, reflection and transmission. For most humans, color is perceived in the visible light spectrum with three types of cone cells (trichromacy). Other animals may have a different number of cone cell types or have eyes sensitive to different wavelengths, such as bees that can distinguish ultraviolet, and thus have a different color sensitivity range. Animal perception of color originates from different light wavelength or spectral sensitivity in cone cell types, which is then processed by the brain. Colors have perceived properties such as hue, colorfulness, and lightness. Colors can also be additively mixed (mixing light) or subtractively mixed (mixing pigments). If colors are mixed in the right proportions, because of metamerism they may look the same as another stimulus with a different reflection or emission spectrum. For convenience, colors can be organized in a color space, which, when abstracted as a mathematical color model, can assign each region of color a corresponding set of numbers. As such, color spaces are an essential tool for color reproduction in print, photography, computer monitors, and television. Some of the most well-known color models and color spaces are RGB, CMYK, HSL/HSV, CIE Lab, and YCbCr/YUV. Because the perception of color is an important aspect of human life, different colors have been associated with emotions, activity, and nationality. Names of color regions in different cultures can have different, sometimes overlapping areas. In the visual arts, color theory is used to govern the use of colors in an aesthetically pleasing and harmonious way. The theory of color includes color complements; color balance; and the classification of primary, secondary, and tertiary colors. The study of colors in general is called color science. Physical properties Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light". Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class, the members are called metamers of the color in question. This effect can be visualized by comparing the light sources' spectral power distributions and the resulting colors. The familiar colors of the rainbow in the spectrum, named using the Latin word for appearance or apparition by Isaac Newton in 1671, include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. A spectrum diagram shows approximate wavelengths (in nm) for the spectral colors in the visible range. 
Spectral colors have 100% purity and are fully saturated. A complex mixture of spectral colors can be used to describe any color, which is the definition of a light power spectrum. The spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency. Despite the ubiquitous ROYGBIV mnemonic used to remember the spectral colors in English, the inclusion or exclusion of colors is contentious, with disagreement often focused on indigo and cyan. Even if the subset of color terms is agreed upon, their wavelength ranges and the borders between them may not be. The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably. For example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive green. Additionally, hue shifts toward yellow or blue occur if the intensity of a spectral light is increased; this is called the Bezold–Brücke shift. In color models capable of representing spectral colors, such as CIELUV, a spectral color has maximal saturation. In Helmholtz coordinates, this is described as 100% purity. The physical color of an object depends on how it absorbs and scatters light. Most objects scatter light to some degree and do not reflect or transmit light specularly like glass or mirrors. A transparent object allows almost all light to pass through; thus transparent objects are perceived as colorless. Conversely, an opaque object does not allow light to pass through and instead absorbs or reflects the light it receives. Like transparent objects, translucent objects allow light to pass through, but translucent objects are seen as colored because they scatter or absorb certain wavelengths of light via internal scattering. The absorbed light is often dissipated as heat. Color vision Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Isaac Newton that light was identified as the source of the color sensation. In 1810, Johann Wolfgang von Goethe published his comprehensive Theory of Colors, in which he provided a rational description of color experience, which "tells us how it originates, not what it is".[full citation needed] In 1801, Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it." At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory. 
In 1931, the International Commission on Illumination (CIE), an international group of experts, developed a mathematical color model which mapped out the space of observable colors, allowing every individual color to be specified with a set of three numbers. The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic: the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones or S cones (or, misleadingly, blue cones). The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light perceived as greenish yellow, with wavelengths around 570 nm. Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance: each cone's output is determined by the amount of light that falls on it over all wavelengths. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values. The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors. The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all. On the other hand, in dim light, the cones are understimulated, leaving only the signal from the rods, resulting in a colorless response (furthermore, the rods are barely sensitive to light in the "red" range). In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are also summarized in the Kruithof curve, which describes the change of color perception and pleasingness of light as a function of temperature and intensity. While the mechanisms of color vision at the level of the retina are well described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes. A toy numerical sketch of these two stages, cone tristimulus values followed by opponent recombination, is given below. 
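The following sketch illustrates univariance, tristimulus values, and opponent recombination. The Gaussian cone sensitivities and the channel weightings are invented for illustration; none of the numbers are measured physiological data.

    # toy_opponent.py -- toy model: spectrum -> cone tristimulus values
    # -> opponent channels. The Gaussian sensitivities and the channel
    # weightings are illustrative assumptions, not physiological data.
    import numpy as np

    wl = np.arange(390, 701)  # visible wavelengths, nm

    def gaussian(peak, width):
        return np.exp(-0.5 * ((wl - peak) / width) ** 2)

    # Rough peak sensitivities: S ~450 nm, M ~540 nm, L ~570 nm.
    S_sens = gaussian(450, 20)
    M_sens = gaussian(540, 35)
    L_sens = gaussian(570, 40)

    def cone_responses(spectrum):
        # Univariance: each cone reports a single number, the incoming
        # spectrum weighted by its sensitivity and summed over wavelength.
        return (np.sum(spectrum * L_sens),
                np.sum(spectrum * M_sens),
                np.sum(spectrum * S_sens))

    def opponent_channels(L, M, S):
        # One common schematic form of the three opponent channels.
        return {'red-green': L - M,
                'blue-yellow': S - (L + M) / 2,
                'luminance': L + M}

    # A narrow-band light near 540 nm drives M most strongly, so the
    # red-green channel swings negative (toward "green").
    L, M, S = cone_responses(gaussian(540, 5))
    print(opponent_channels(L, M, S))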
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world (a type of qualia), is a matter of complex and continuing philosophical dispute. From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3, the dorsal posterior inferior temporal cortex, and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentrations of such cells, though even the latter cells respond better to some wavelengths than to others, a finding confirmed by subsequent studies. The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color, but it is worth noting that the orientation-selective cells within V4 are more broadly tuned than their counterparts in V1, V2, and V3. Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space. A color vision deficiency causes an individual to perceive a smaller gamut of colors than the standard observer with normal color vision. The effect can be mild, having lower "color resolution" (i.e. anomalous trichromacy); moderate, lacking an entire dimension or channel of color (e.g. dichromacy); or complete, lacking all color perception (i.e. monochromacy). Most forms of color blindness derive from one or more of the three classes of cone cells either being missing, having a shifted spectral sensitivity, or having lower responsiveness to incoming light. In addition, cerebral achromatopsia is caused by neural anomalies in those parts of the brain where visual processing takes place. Some colors that appear distinct to an individual with normal color vision will appear metameric to the color blind. The most common form of color blindness is congenital red–green color blindness, affecting ~8% of males. Individuals with the strongest form of this condition (dichromacy) will experience blue and purple, green and yellow, and teal and gray as colors of confusion, i.e. metamers. Outside of humans, which are mostly trichromatic (having three types of cones), most mammals are dichromatic, possessing only two cones. However, outside of mammals, most vertebrates are tetrachromatic, having four types of cones. This includes most birds, reptiles, amphibians, and bony fish. An extra dimension of color vision means these vertebrates can see two distinct colors that a normal human would view as metamers. 
Some invertebrates, such as the mantis shrimp, have an even higher number of cone types (12), which could lead to a richer color gamut than humans can even imagine. The existence of human tetrachromats is a contentious notion. As many as half of all human females have four distinct cone classes, which could enable tetrachromacy. However, a distinction must be made between retinal (or weak) tetrachromats, which express four cone classes in the retina, and functional (or strong) tetrachromats, which are able to make the enhanced color discriminations expected of tetrachromats. In fact, there is only one peer-reviewed report of a functional tetrachromat. It is estimated that while the average person is able to see one million colors, someone with functional tetrachromacy could see a hundred million colors. In certain forms of synesthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing sounds (chromesthesia) will evoke a perception of color. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and to increased activation of brain regions involved in color perception, thus demonstrating their reality and similarity to real color percepts, albeit evoked through a non-standard route. Synesthesia can occur genetically, with 4% of the population having variants associated with the condition. Synesthesia has also been known to occur with brain damage, drugs, and sensory deprivation. The philosopher Pythagoras experienced synesthesia and provided one of the first written accounts of the condition in approximately 550 BCE. He created mathematical equations for musical notes that could form part of a scale, such as an octave. After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color. Afterimage effects have also been used by artists, including Vincent van Gogh. When an artist uses a limited color palette, the human visual system tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish. The trichromatic theory is strictly true only when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin H. Land in the 1970s and led to his retinex theory of color constancy. Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM). There is no need to dismiss the trichromatic theory of vision; rather, it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment, for instance with the simple von Kries-style scaling sketched below. 
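A minimal sketch of chromatic adaptation in the von Kries style, one of the simple ideas underlying the modern appearance models just mentioned: each cone (LMS) response is rescaled so that the white of the old illuminant maps to the white of the new one. This is not CIECAM02 or iCAM, and the numbers are illustrative only.

    # von_kries.py -- von Kries-style chromatic adaptation: diagonal
    # scaling of LMS cone responses, so the source white maps to the
    # destination white. A sketch of the core idea, not a full model.
    import numpy as np

    def von_kries_adapt(lms_color, lms_src_white, lms_dst_white):
        # Scale each cone channel by the ratio of the destination white
        # to the source white.
        gain = np.asarray(lms_dst_white) / np.asarray(lms_src_white)
        return np.asarray(lms_color) * gain

    # Illustrative values: a surface seen under a reddish illuminant,
    # adapted toward an equal-energy white.
    color_under_src = [0.40, 0.30, 0.10]
    src_white = [1.00, 0.80, 0.60]
    dst_white = [1.00, 1.00, 1.00]
    print(von_kries_adapt(color_under_src, src_white, dst_white))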
Reproduction Color reproduction is the science of creating colors for the human eye that faithfully represent the desired color. It focuses on how to construct a spectrum of wavelengths that will best evoke a certain color in an observer. Most colors are not spectral colors, meaning they are mixtures of various wavelengths of light. However, these non-spectral colors are often described by their dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the non-spectral color. Dominant wavelength is roughly akin to hue. There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, silver, and white) and colors such as pink, tan, and magenta. Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although the color rendering index of each light source may affect the color of objects illuminated by these metameric light sources. Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application. No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because the response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm of the same intensity as the mixture. Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, so such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut. Another problem with color reproduction systems is connected with the initial measurement of color, or colorimetry. The characteristics of the color sensors in measurement devices (e.g. cameras, scanners) are often very far from the characteristics of the receptors in the human eye. A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers, according to their color vision deviations from the standard observer. The differing color responses of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding a good mapping of input colors into the gamut that can be reproduced. Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors, televisions, and computer terminals. Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye. If the light is not a pure white source (as is the case with nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black. Red paint is red because it scatters only the red components of the spectrum. If red paint is illuminated by blue light, the blue light will be absorbed by the red paint, creating the appearance of a black object. The subtractive model also predicts the color resulting from a mixture of paints, or similar media such as fabric dye, whether applied in layers or mixed together prior to application. In the case of paint mixed before application, incident light interacts with many different pigment particles at various depths inside the paint layer before emerging. The two mixing rules are contrasted in the short sketch below. 
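The contrast between the two models can be made concrete with a few lines of arithmetic: in additive mixing the spectral powers of the lights add, while in subtractive mixing each ink or filter multiplies the fraction of light that survives. The three-sample spectra below are invented for illustration.

    # mixing.py -- schematic additive vs. subtractive mixing.
    import numpy as np

    # Relative power/transmittance sampled at three wavelengths (nm):
    wl = np.array([450, 550, 650])            # blue, green, red samples

    red_light   = np.array([0.0, 0.1, 1.0])   # emission spectra of lights
    green_light = np.array([0.1, 1.0, 0.1])

    # Additive mixing (projectors, screens): powers simply add, so red
    # plus green light is seen as yellow.
    yellow_light = red_light + green_light

    cyan_filter   = np.array([1.0, 0.9, 0.1]) # transmittance of filters
    yellow_filter = np.array([0.1, 0.9, 1.0])

    # Subtractive mixing (inks, filters): each layer multiplies what is
    # left, so stacking cyan and yellow passes mainly green.
    green_result = cyan_filter * yellow_filter

    print("additive:", yellow_light, " subtractive:", green_result)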
Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall-effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers, it will reflect some wavelengths and transmit others, depending on the layers' thickness. Structural color is studied in the field of thin-film optics. The most ordered or the most changeable structural colors are iridescent. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. 
Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research on butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics. Optimal colors Optimal colors are the most chromatic colors that surfaces can have. That is, optimal colors are the theoretical limit for the color of objects with classical reflection; phenomena like fluorescence or structural coloration may cause the color of objects to lie outside the optimal color solid. For now, it is not possible to produce objects with such colors, at least not without resorting to more complex physical phenomena. The plot of the gamut bounded by optimal colors in a color space is called the optimal color solid or Rösch–MacAdam color solid. The reflectance spectrum of a color is the amount of light of each wavelength that it reflects, in proportion to a given maximum, which is total reflection of light of that wavelength and has the value of 1 (100%). If the reflectance spectrum of a color is 0 (0%) or 1 (100%) across the entire visible spectrum, and it has no more than two transitions between 0 and 1, or 1 and 0, then it is an optimal color. With the current state of technology, no material or pigment with these properties can be produced. Four types of "optimal color" spectra are possible, as sketched in the code below: The first type produces colors that are similar to the spectral colors and follow roughly the horseshoe-shaped portion of the CIE xy chromaticity diagram (the spectral locus), but are, in surfaces, more chromatic, although less spectrally pure. The second type produces colors that are similar to (but, in surfaces, more chromatic and less spectrally pure than) the colors on the straight line in the CIE xy chromaticity diagram (the line of purples), leading to magenta or purple-like colors. The third type produces the colors located on the "warm" sharp edge of the optimal color solid (explained later in the article). The fourth type produces the colors located on the "cool" sharp edge of the optimal color solid. In optimal color solids, the colors of the visible spectrum are theoretically black, because their reflectance spectrum is 1 (100%) at only one wavelength and 0 at every other visible wavelength, meaning that they have a lightness of 0 with respect to white, and will also have 0 chroma, but full (100%) spectral purity. In short: in optimal color solids, spectral colors are equivalent to black (0 lightness, 0 chroma), but have full spectral purity (they are located on the horseshoe-shaped spectral locus of the chromaticity diagram). In linear color spaces, such as LMS or CIE 1931 XYZ, the set of rays that start at the origin (black, (0, 0, 0)) and pass through all the points that represent the colors of the visible spectrum, together with the portion of a plane that passes through the violet half-line and the red half-line (both ends of the visible spectrum), generate the "spectrum cone".
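A minimal sketch of the four types of optimal-color reflectance spectra described above; the wavelength grid and band edges are arbitrary illustrative choices:

```python
# Schematic reflectance spectra for the four types of optimal colors:
# values are only 0 or 1, with at most two transitions.
WAVELENGTHS = list(range(400, 701, 10))   # visible range, 10 nm steps

def spectrum(kind, lo=480, hi=580):
    if kind == "band-pass":    # type 1: spectral-like colors
        return [1 if lo <= w <= hi else 0 for w in WAVELENGTHS]
    if kind == "band-stop":    # type 2: purples (both spectrum ends)
        return [0 if lo <= w <= hi else 1 for w in WAVELENGTHS]
    if kind == "high-pass":    # type 3: the "warm" edge (toward red)
        return [1 if w >= lo else 0 for w in WAVELENGTHS]
    if kind == "low-pass":     # type 4: the "cool" edge (toward violet)
        return [1 if w <= hi else 0 for w in WAVELENGTHS]
    raise ValueError(kind)

def transitions(spec):
    return sum(a != b for a, b in zip(spec, spec[1:]))

for kind in ["band-pass", "band-stop", "high-pass", "low-pass"]:
    s = spectrum(kind)
    assert transitions(s) <= 2            # the defining property
    print(f"{kind:9s} transitions={transitions(s)}")
```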
The black point (coordinates (0, 0, 0)) of the optimal color solid (and only the black point) is tangent to the "spectrum cone", and the white point ((1, 1, 1)) (and only the white point) is tangent to the "inverted spectrum cone", with the "inverted spectrum cone" being symmetrical to the "spectrum cone" with respect to the middle gray point ((0.5, 0.5, 0.5)). This means that, in linear color spaces, the optimal color solid is centrally symmetric. In most color spaces, the surface of the optimal color solid is smooth, except for two points (black and white) and two sharp edges: the "warm" edge, which goes from black, to red, to orange, to yellow, to white; and the "cool" edge, which goes from black, to deep violet, to blue, to cyan, to white. This is due to the following: if the only reflected portion of a color's spectrum is the extreme spectral red (located at one end of the spectrum), the color will be seen as black. If the band of total reflectance is widened, now covering from the red end of the spectrum to the yellow wavelengths, the color will be seen as red or orange. If the band is expanded even more, covering some green wavelengths, it will be seen as yellow. If it is expanded even more, it will cover more wavelengths than the yellow semichrome does, approaching white, which is reached when the full spectrum is reflected. The described process is called "cumulation". Cumulation can be started at either end of the visible spectrum: cumulation starting from the red end, as just described, generates the "warm" sharp edge, while cumulation starting at the violet end generates the "cool" sharp edge. On modern computers, it is possible to calculate an optimal color solid with great precision in seconds. Usually, only the MacAdam limits (the optimal colors, the boundary of the optimal color solid) are computed, because all the other (non-optimal) possible surface colors exist inside the boundary. Each hue has a maximum chroma point, semichrome, or full color; objects cannot have a color of that hue with a higher chroma. These are the most chromatic, vibrant colors that objects can have. They were called semichromes or full colors by the German chemist and philosopher Wilhelm Ostwald in the early 20th century. If B is the complementary wavelength of wavelength A, then the straight line that connects A and B passes through the achromatic axis in a linear color space, such as LMS or CIE 1931 XYZ. If the reflectance spectrum of a color is 1 (100%) for all the wavelengths between A and B, and 0 for the remaining wavelengths of the visible spectrum, then that color is a maximum chroma color, semichrome, or full color (this explains why they were called semichromes). Thus, maximum chroma colors are a type of optimal color. As explained, full colors are far from being monochromatic (physically, not perceptually). If the spectral purity of a semichrome is increased, its chroma decreases, because it approaches the spectral colors, which in the optimal color solid are equivalent to black. In perceptually uniform color spaces, the lightness of the full colors varies from around 30% in the violetish blue hues to around 90% in the yellowish hues. The chroma of each maximum chroma point also varies depending on the hue; in optimal color solids plotted in perceptually uniform color spaces, semichromes like red, green, violet, and magenta have a high chroma, while semichromes like yellow, orange, and cyan have a slightly lower chroma.
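The central symmetry claimed above follows from linearity alone: if T is a linear map from reflectance spectra to tristimulus values, then T(R) + T(1 − R) = T(1), the white point, so the solid maps onto itself through middle gray. A small check under toy (invented) sensitivity curves, which is all the argument needs since only linearity matters:

```python
import math

# For any optimal reflectance R, its complement 1-R is also optimal
# (still 0/1 valued with at most two transitions), and in a linear
# color space color(R) + color(1-R) = white. Sensitivity curves below
# are toy Gaussians, not real colorimetric data.
WAVELENGTHS = list(range(400, 701, 10))
SENSORS = [(570, 50), (545, 45), (445, 30)]      # toy (peak, width) pairs

def color(reflectance):
    return tuple(
        sum(r * math.exp(-((w - peak) / width) ** 2)
            for w, r in zip(WAVELENGTHS, reflectance))
        for peak, width in SENSORS)

optimal = [1 if 480 <= w <= 580 else 0 for w in WAVELENGTHS]  # a band-pass
complement = [1 - r for r in optimal]
white = color([1] * len(WAVELENGTHS))

pair_sum = tuple(a + b for a, b in zip(color(optimal), color(complement)))
print(all(math.isclose(s, w) for s, w in zip(pair_sum, white)))  # True
```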
Cultural perspective The meanings and associations of colors can play a major role in works of art, including literature. Individual colors have a variety of cultural associations, such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a pseudoscientific therapy attributed to various Eastern traditions. Colors have different associations in different countries and cultures. Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men. The combination of the colors red and yellow together can induce hunger, which has been capitalized on by a number of chain restaurants. Color plays a role in memory development too: a photograph in black and white is slightly less memorable than one in color. Studies also show that wearing bright colors makes a person more memorable to the people they meet. Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet, etc.), saturation, and brightness. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red". In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English). See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Video_chat] | [TOKENS: 9674] |
Contents Videotelephony Videotelephony, also known as videoconferencing, video calling, or telepresence, is the use of audio and video for simultaneous two-way communication. Videophones were standalone devices for video calling (compare Telephone). As smartphones and computers have become capable of video calling, the demand for a separate category of videophones has disappeared. Videoconferencing implies group communication and is used in telepresence, the goal of which is to create the illusion that remote participants are in the same room. The concept of videotelephony was conceived in the late 19th century, and versions were demonstrated to the public starting in the 1930s. In April 1930, reporters gathered at AT&T corporate headquarters on Broadway in New York City for the first public demonstration of two-way video telephony. The event linked the headquarters building with a Bell Laboratories building on West Street. Early demonstrations were installed at booths in post offices and shown at world expositions. AT&T demonstrated Picturephone at the 1964 World’s Fair in New York City. In 1970, AT&T launched Picturephone as the first commercial personal videotelephone system. In addition to videophones, there existed image phones which exchanged still images between units every few seconds over conventional telephone lines. The development of advanced video codecs, more powerful CPUs, and high-bandwidth Internet service in the early 2000s allowed the new category of smartphones to provide high-quality, low-cost color service between users almost anywhere in the world, eliminating the videophone as a separate product concept. Applications of videotelephony include sign language transmission for deaf and speech-impaired people, distance education, telemedicine, and overcoming mobility issues. News media organizations have used videotelephony for broadcasting.[citation needed] History The concept of videotelephony was first conceived in the late 1870s, both in the United States and in Europe, although the basic sciences to permit its very earliest trials would take nearly a half century to be discovered.[citation needed] The prerequisite knowledge arose from intensive research and experimentation in several telecommunication fields, notably electrical telegraphy, telephony, radio, and television. Simple analog videophone communication could be established as early as the invention of television. Such an antecedent usually consisted of two closed-circuit television systems connected via coaxial cable or radio. An example of that was the German Reich Postzentralamt (post office) videotelephone network serving Berlin and several German cities via coaxial cables between 1936 and 1940. Gregorio Y. Zara, a Filipino scientist, invented the first videophone in 1954, which was patented in 1955 as a "photo phone signal separator network." He is recognized as the Father of Videoconferencing for his pioneering contribution to the development of videotelephony technology. The development of videotelephony as a subscription service started in the latter half of the 1920s in the United Kingdom and the United States, spurred notably by John Logie Baird and AT&T's Bell Labs. This occurred in part, at least with AT&T, to serve as an adjunct supplementing the use of the telephone. A number of organizations believed that videotelephony would be superior to plain voice communications.
Attempts at using normal telephony networks to transmit slow-scan video, such as the early systems developed by AT&T Corporation, first researched in the 1950s, failed mostly due to the poor picture quality and the lack of efficient video compression techniques. During the first crewed space flights, NASA used two radio-frequency (UHF or VHF) video links, one in each direction. TV channels routinely use this type of videotelephony when reporting from distant locations. The news media were to become regular users of mobile links to satellites using specially equipped trucks, and much later via special satellite videophones in a briefcase. This technique was very expensive, though, and was not adopted for applications such as telemedicine, distance education, and business meetings. Decades of research and development culminated in the 1970 commercial launch of AT&T's Picturephone service, available in select cities. However, the system was a commercial failure, chiefly due to consumer apathy, high subscription costs, and lack of network effect—with only a few hundred Picturephones in the world, users had extremely few contacts they could actually call, and interoperability with other videophone systems would not exist for decades. In the 1980s, digital telephony transmission networks became possible, such as with ISDN networks. During this time, there was also research into other forms of digital video and audio communication. Many of these technologies, such as the Media space, are not as widely used today as videoconferencing but were still an important area of research. The first dedicated systems started to appear as ISDN networks were expanding throughout the world. One of the first commercial videoconferencing systems sold to companies came from PictureTel Corp., which had an initial public offering in November 1984. In 1984, Concept Communication in the United States created a circuit board for standard personal computers that doubled the video frame rate of typical digital videotelephone systems from 15 to 30 frames per second, and reduced the cost from $100,000 to $12,000. The company also secured a patent for a codec for full-motion videoconferencing, first demonstrated at AT&T Bell Labs in 1986. Very expensive videoconferencing systems continued to rapidly evolve throughout the 1980s and 1990s. Proprietary equipment, software, and network requirements gave way to standards-based technologies that were available for anyone to purchase at a reasonable cost. While videoconferencing technology was initially used primarily within internal corporate communication networks, one of the first community service uses of the technology started in 1992 through a partnership between PictureTel and IBM, which at the time were promoting a jointly developed desktop-based videoconferencing product known as the PCS/1. Over the next 15 years, Project DIANE (Diversified Information and Assistance Network) grew to use a variety of videoconferencing platforms to create a multi-state cooperative public service and distance education network consisting of several hundred schools, libraries, science museums, zoos and parks, and many other community-oriented organizations.[citation needed] Advances in video compression allowed digital video streams to be transmitted over the Internet, which was previously difficult due to the impractically high bandwidth requirements of uncompressed video.
The DCT algorithm was the basis for the first practical video coding standard that was useful for online videoconferencing, H.261, standardised by the ITU-T in 1988, and for the subsequent H.26x video coding standards. In 1992 CU-SeeMe was developed at Cornell by Tim Dorcey et al. In 1995 the first public videoconference between North America and Africa took place, linking a technofair in San Francisco with a techno-rave and cyberdeli in Cape Town. At the 1998 Winter Olympics opening ceremony in Nagano, Japan, Seiji Ozawa conducted the Ode to Joy from Beethoven's Ninth Symphony simultaneously across five continents in near-real-time. Kyocera conducted a two-year development campaign from 1997 to 1999 that resulted in the release of the VP-210 Visual Phone, the first mobile colour videophone that also doubled as a camera phone for still photos. The camera phone was the same size as similar contemporary mobile phones, but sported a large camera lens and a 5 cm (2 inch) colour TFT display capable of displaying 65,000 colors, and was able to process two video frames per second. Videotelephony was popularized in the 2000s via free Internet services such as Skype and iChat, web plugins supporting H.26x video standards, and online telecommunication programs that promoted low-cost, albeit lower-quality, videoconferencing to virtually every location with an Internet connection. Videotelephony became even more widespread through the deployment of video-enabled mobile phones such as the 2010 iPhone 4, plus videoconferencing and computer webcams which use Internet telephony. In the upper echelons of government, business, and commerce, telepresence technology, an advanced form of videoconferencing, has helped reduce the need to travel.[citation needed] In May 2005, the first high definition videoconferencing systems, produced by Lifesize, were displayed at the Interop trade show in Las Vegas, Nevada, able to provide video at 30 frames per second with a 1280 by 720 display resolution. Polycom introduced its first high definition videoconferencing system to the market in 2006. As of the 2010s, high-definition resolution for videoconferencing became a popular feature, with most major suppliers in the videoconferencing market offering it. Technological developments in the 2010s have extended the capabilities of videoconferencing systems beyond the boardroom for use with hand-held mobile devices that combine the use of video, audio and on-screen drawing capabilities broadcasting in real time over secure networks, independent of location. Mobile collaboration systems now give people in previously unreachable locations, such as workers on an offshore oil rig, the ability to view and discuss issues with colleagues thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications as well, such as those that allow for live and still image streaming. The highest-ever video call (other than those from aircraft and spacecraft) took place on May 19, 2013, when British adventurer Daniel Hughes used a smartphone with a BGAN satellite modem to make a videocall to the BBC from the summit of Mount Everest, at 8,848 metres (29,029 ft) above sea level. The COVID-19 pandemic resulted in a significant increase in the use of videoconferencing. Bernstein Research found that Zoom added more subscribers during the first two months of 2020 alone than in the entire year 2019. GoToMeeting had a 20 percent increase in usage, according to LogMeIn.
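The discrete cosine transform mentioned at the start of this passage is the workhorse of H.261-style codecs: it concentrates a block's energy into a few low-frequency coefficients, which can then be quantized and entropy-coded. Below is a minimal, unoptimized 2-D DCT-II over one 8×8 block; it illustrates the transform itself, not any actual codec.

```python
import math

N = 8  # H.261-style codecs transform 8x8 blocks

def dct2(block):
    """Plain 2-D DCT-II of an NxN block (textbook form, O(N^4))."""
    def c(u):
        return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

# A smooth gradient block, typical of natural-image content.
block = [[x + y for y in range(N)] for x in range(N)]
coeffs = dct2(block)

# Almost all signal energy lands in a few low-frequency coefficients,
# which is what makes aggressive compression of the rest possible.
total = sum(v * v for row in coeffs for v in row)
low = sum(coeffs[u][v] ** 2 for u in range(2) for v in range(2))
print(f"energy in 4 lowest-frequency coefficients: {100 * low / total:.1f}%")
```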
UK-based StarLeaf reported a 600 percent increase in national call volumes. Videoconferencing became so widespread during the pandemic that the term Zoom fatigue came to prominence, referring to the taxing nature of spending long periods of time on videocalls. This fatigue refers to the psychological and physiological effects on participants involved in videoconferencing. One experimental study from 2021 revealed a link between camera use in videoconferencing and the occurrence of fatigue in individual participants. Furthermore, a 2022 article in the journal "Computers in Human Behavior" highlighted a study linking negative attitudes with the use of "self-view" when videoconferencing. On 21 September 2021, Facebook launched two new versions of its Portal video-calling devices, the Portal Go and Portal Plus. The new video calling devices include the first portable variety of the hardware and a number of updates. Major categories Videotelephony can be categorized by its functionality and intended purpose, and also by its method of transmission. Videophones were the earliest form of videotelephony, dating back to initial tests in 1927 by AT&T. During the late 1930s, the post offices of several European governments established public videophone services for person-to-person communications using dual cable circuit telephone transmission technology. In the present day, standalone videophones and UMTS video-enabled mobile phones are usually used on a person-to-person basis. Videoconferencing saw its earliest use with AT&T's Picturephone service in the early 1970s. Transmissions were analog over short distances, but converted to digital forms for longer calls, again using telephone transmission technology. Popular corporate video-conferencing systems in the present day have migrated almost exclusively to digital ISDN and IP transmission modes due to the need to convey the very large amounts of data generated by their cameras and microphones. These systems are often intended for use in conference mode, that is, by many people in several different locations, all of whom can be viewed by every participant at each location. Telepresence systems are a newer, more advanced subset of videoconferencing systems, meant to allow higher degrees of video and audio fidelity. Such high-end systems are typically deployed in corporate settings. Mobile collaboration systems are another recent development, combining the use of video, audio, and on-screen drawing capabilities using newest-generation hand-held electronic devices broadcasting over secure networks, enabling multi-party conferencing in real time, independent of location. Proximity chat is another alternative mode, focused on the flexibility of small group conversations. A more recent technology encompassing these functions is TV cams. TV cams enable people to make video calls using video calling services, such as Skype, on their TV, without using a PC connection. TV cams are specially designed video cameras that feed images in real time to another TV camera or other compatible computing devices like smartphones, tablets and computers. Webcams are popular, relatively low-cost devices that can provide live video and audio streams via personal computers, and can be used with many software clients for both video calls and videoconferencing. Each of the systems has its own advantages and disadvantages, including video quality, capital cost, degrees of sophistication, transmission capacity requirements, and cost of use.
Security concerns Computer security experts have shown that poorly configured or inadequately supervised videoconferencing systems can permit an easy virtual entry by computer hackers and criminals into company premises and corporate boardrooms. Adoption For over a century, futurists have envisioned a future in which telephone conversations take place as actual face-to-face encounters with video as well as audio. Sometimes it is simply not possible or practical to have face-to-face meetings with two or more people. Sometimes a telephone conversation or conference call is adequate. Other times, e-mail exchanges are adequate. However, videoconferencing adds another option and can be considered when an in-person meeting is not practical. Bill Gates said in 2001 that he used videoconferencing "three or four times a year", because digital scheduling was difficult and "if the overhead is super high, then you might as well just have a face-to-face meeting". Some observers argue that three outstanding issues have prevented videoconferencing from becoming a widely adopted form of communication, despite the ubiquity of videoconferencing-capable systems. These are some of the reasons many organizations only use the systems internally, where there is less risk of loss of customers. An alternative for those lacking dedicated facilities is the rental of videoconferencing-equipped meeting rooms in cities around the world. Clients can book rooms and turn up for the meeting, with all technical aspects being prearranged and support being readily available if needed. The issue of eye contact may be solved with advancing technology, including smartphones which have the screen and camera in essentially the same place. In developed countries, the near-ubiquity of smartphones, tablet computers, and computers with built-in audio and webcams removes the need for expensive dedicated hardware. Technology The core technology used in a videotelephony system is digital compression of audio and video streams in real time. The hardware or software that performs compression is called a codec (coder/decoder). Compression rates of up to 1:500 can be achieved. The resulting digital stream of 1s and 0s is subdivided into labeled packets, which are then transmitted through a digital network of some kind (usually ISDN or IP). A videoconferencing system also requires several other components beyond the codec, and there are basically three kinds of videoconferencing and videophone systems. Videoconferencing systems use several methods to determine which video feed or feeds to display. Continuous Presence simply displays all participants at the same time, usually with the exception that the viewer either does not see their own feed, or sees their own feed in miniature. Voice-Activated Switch selectively chooses a feed to display at each endpoint, with the goal of showing the person who is currently speaking. This is done by choosing the feed (other than the viewer's) which has the loudest audio input (perhaps with some filtering to avoid switching for very short-lived volume spikes). Often, if no remote parties are currently speaking, the feed with the last speaker remains on the screen. Acoustic echo cancellation (AEC) is a processing algorithm that uses knowledge of the audio output to monitor the audio input and filter from it the echoes of the output that return after some time delay.
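A minimal sketch of the adaptive-filter idea behind echo cancellation, assuming an invented echo path and a normalized least-mean-squares (NLMS) update; real cancellers add double-talk detection, nonlinear processing, and far longer filters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal(n)                        # far-end (loudspeaker) signal
echo_path = np.array([0.0, 0.0, 0.5, 0.3, 0.1])   # made-up room impulse response
d = np.convolve(x, echo_path)[:n]                 # microphone picks up the echo

L, mu, eps = 8, 0.5, 1e-6                         # filter length, step, safety term
w = np.zeros(L)                                   # adaptive estimate of the path
err = np.zeros(n)
for i in range(L, n):
    seg = x[i - L + 1:i + 1][::-1]                # most recent L far-end samples
    e = d[i] - w @ seg                            # residual after cancellation
    w += mu * e * seg / (seg @ seg + eps)         # NLMS weight update
    err[i] = e

print("mean |residual|, early samples:", np.abs(err[L:500]).mean())
print("mean |residual|, late samples :", np.abs(err[-500:]).mean())
# The late residual is orders of magnitude smaller: the filter has
# learned the echo path and subtracts the echo from the input.
```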
If unattended, these echoes can be re-amplified several times, leading to a number of problems. Echo cancellation is a processor-intensive task that usually works over a narrow range of sound delays. Videophones have historically employed a variety of transmission and reception bandwidths, which can be understood as data transmission speeds. The lower the transmission/reception bandwidth, the lower the data transfer rate, resulting in a progressively limited and poorer image quality (i.e. lower resolution and/or frame rate). Data transfer rates and live video image quality are related, but are also subject to other factors such as data compression techniques. Some early videophones employed very low data transmission rates with a resulting poor video quality. Broadband bandwidth is often called high-speed, because it usually has a high rate of data transmission. In general, any connection of 256 kbit/s (0.256 Mbit/s) or greater is considered broadband Internet. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) recommendation I.113 has defined broadband as a transmission capacity of 1.5 to 2 Mbit/s. The Federal Communications Commission (United States) definition of broadband is 25 Mbit/s. Currently, adequate video for some purposes becomes possible at data rates lower than the ITU-T broadband definition, with rates of 768 kbit/s and 384 kbit/s used for some videoconferencing applications, and rates as low as 100 kbit/s used for videophones using H.264/MPEG-4 AVC compression protocols. The newer MPEG-4 video and audio compression format can deliver high-quality video at 2 Mbit/s, which is at the low end of cable modem and ADSL broadband performance.[citation needed] The International Telecommunication Union (ITU) has three umbrella groups of standards for videoconferencing. The Unified Communications Interoperability Forum (UCIF), a non-profit alliance between communications vendors, launched in May 2010. The organization's vision is to maximize the interoperability of UC based on existing standards. Founding members of UCIF include HP, Microsoft, Polycom, Logitech/Lifesize, and Juniper Networks. Videoconferencing in the late 20th century was limited to the H.323 protocol (notably Cisco's SCCP implementation was an exception), but newer videophones often use SIP, which is often easier to set up in home networking environments. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol (HTTP) and the Simple Mail Transfer Protocol (SMTP). H.323 is still used, but more commonly for business videoconferencing, while SIP is more commonly used in personal consumer videophones. A number of call-setup methods based on instant messaging protocols such as Skype also now provide video. Another protocol used by videophones is H.324, which mixes call setup and video compression. Videophones that work on regular phone lines typically use H.324, but the bandwidth is limited by the modem to around 33 kbit/s, limiting the video quality and frame rate. A slightly modified version of H.324 called 3G-324M, defined by 3GPP, is also used by some cellphones that allow video calls, typically for use only in UMTS networks. There is also the H.320 standard, which specified technical requirements for narrow-band visual telephone systems and terminal equipment, typically for videoconferencing and videophone services.
It applied mostly to dedicated circuit-based switched network (point-to-point) connections of moderate or high bandwidth, such as the medium-bandwidth ISDN digital phone protocol or fractional high-bandwidth T1 lines. Modern products based on the H.320 standard usually also support the H.323 standard. The IAX2 protocol also supports videophone calls natively, using the protocol's own capabilities to transport alternate media streams. A few hobbyists obtained the Nortel 1535 Color SIP Videophone cheaply in 2010 as surplus after Nortel's bankruptcy and deployed the sets on the Asterisk (PBX) platform. While additional software is required to patch together multiple video feeds for conference calls or convert between dissimilar video standards, SIP calls between two identical handsets within the same PBX were relatively straightforward. The components within a videoconferencing system can be divided into several different layers: User Interface, Conference Control, Control or Signaling Plane, and Media Plane. Videoconferencing User Interfaces (VUI) can be either graphical or voice-responsive. Many in the industry have encountered both types of interface, and normally a graphical interface is encountered on a computer. User interfaces for conferencing have a number of different uses; they can be used for scheduling, setup, and making a video call. Through the user interface, the administrator is able to control the other three layers of the system. Conference Control performs resource allocation, management, and routing. This layer, along with the User Interface, creates meetings (scheduled or unscheduled) or adds and removes participants from a conference. The Control (Signaling) Plane contains the stacks that signal different endpoints to create a call and/or a conference. Signaling protocols include, but are not limited to, H.323 and the Session Initiation Protocol (SIP). These signals control incoming and outgoing connections as well as session parameters. The Media Plane controls the audio and video mixing and streaming. This layer manages the Real-time Transport Protocol (RTP), the User Datagram Protocol (UDP), and the Real-time Transport Control Protocol (RTCP). RTP and UDP normally carry information such as the payload type (which identifies the codec), frame rate, video size, and many others. RTCP, on the other hand, acts as a quality-control protocol for detecting errors during streaming. Simultaneous videoconferencing among three or more remote points is possible in a hardware-based system by means of a Multipoint Control Unit (MCU). This is a bridge that interconnects calls from several sources (in a similar way to the audio conference call). All parties call the MCU, or the MCU can call the participating parties in sequence. There are MCU bridges for IP and ISDN-based videoconferencing. There are MCUs which are pure software and others that are a combination of hardware and software. An MCU is characterized according to the number of simultaneous calls it can handle, its ability to conduct transposing of data rates and protocols, and features such as Continuous Presence, in which multiple parties can be seen on-screen at once. MCUs can be stand-alone hardware devices, or they can be embedded into dedicated videoconferencing units. The MCU consists of two logical components: the Multipoint Controller (MC) and the Multipoint Processor (MP). The MC controls the conferencing while it is active on the signaling plane, which is simply where the system manages conference creation, endpoint signaling, and in-conference controls.
This component negotiates parameters with every endpoint in the network and controls conferencing resources. While the MC controls resources and signaling negotiations, the MP operates on the media plane and receives media from each endpoint. The MP generates output streams from each endpoint and redirects the information to other endpoints in the conference. Some systems are capable of multipoint conferencing with no MCU, stand-alone, embedded or otherwise. These use a standards-based H.323 technique known as decentralized multipoint, where each station in a multipoint call exchanges video and audio directly with the other stations with no central manager or other bottleneck. The advantages of this technique are that the video and audio will generally be of higher quality because they do not have to be relayed through a central point. Also, users can make ad hoc multipoint calls without any concern for the availability or control of an MCU. This added convenience and quality comes at the expense of some increased network bandwidth, because every station must transmit to every other station directly (see the stream-count sketch after this passage). Cloud-based videoconferencing can be used without the hardware generally required by other videoconferencing systems, and can be designed for use by SMEs or larger international or multinational corporations such as Facebook. Cloud-based systems can handle either 2D or 3D video broadcasting. Cloud-based systems can also implement mobile calls, VOIP, and other forms of video calling. They can also come with a video recording function to archive past meetings. Impact High-speed Internet connectivity has become more widely available and affordable, as has good-quality video capture and display hardware. Consequently, personal videoconferencing systems based on webcams, personal computer systems, software compression, and the Internet have become progressively more affordable to the general public. The availability of freeware (often as part of chat programs) has made software-based videoconferencing accessible to many. The widest deployment of videotelephony now occurs in mobile phones. Nearly all mobile phones supporting UMTS networks can work as videophones using their internal cameras and are able to make video calls wirelessly to other UMTS users anywhere.[citation needed] As of the second quarter of 2007, there are over 131 million UMTS users (and hence potential videophone users), on 134 networks in 59 countries.[citation needed] Mobile phones can also use broadband wireless Internet, whether through the cell phone network or over a local Wi-Fi connection, along with software-based videophone apps to make calls to any video-capable Internet user, whether mobile or fixed. Deaf, hard-of-hearing, and mute individuals have a particular role in the development of affordable high-quality videotelephony as a means of communicating with each other in sign language. Unlike Video Relay Service, which is intended to support communication between a caller using sign language and another party using spoken language, videoconferencing can be used directly between two deaf signers. Videophones are increasingly used in the provision of telemedicine to the elderly, disabled, and to those in remote locations, where the ease and convenience of quickly obtaining diagnostic and consultative medical services are readily apparent.
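The bandwidth trade-off between bridged (MCU) and decentralized multipoint noted above is easy to quantify by counting directed media streams; the sketch below assumes one stream per direction per link and ignores the MCU's internal mixing cost:

```python
# Stream counts for multipoint calls. With an MCU, every endpoint
# exchanges one stream pair with the bridge; in decentralized
# (full-mesh) multipoint, every station sends to every other directly.
def mcu_streams(n):
    return 2 * n              # n uplinks to the MCU + n downlinks

def mesh_streams(n):
    return n * (n - 1)        # one directed stream per ordered pair

for n in (3, 5, 10):
    print(f"{n:2d} parties: MCU={mcu_streams(n):3d} streams, "
          f"mesh={mesh_streams(n):3d} streams")
# At 3 parties the mesh is comparable; by 10 parties it needs 90
# directed streams versus 20 through an MCU.
```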
In one instance quoted in 2006: "A nurse-led clinic at Letham has received positive feedback on a trial of a video-link which allowed 60 pensioners to be assessed by medics without traveling to a doctor's office or medical clinic." A further improvement in telemedical services has been the development of new technology incorporated into special videophones to permit remote diagnostic services, such as blood sugar level, blood pressure, and vital signs monitoring. Such units are capable of relaying both regular audio-video plus medical data over either standard (POTS) telephone or newer broadband lines. Videotelephony has also been deployed in corporate teleconferencing, also available through the use of public access videoconferencing rooms. A higher level of videoconferencing that employs advanced telecommunication technologies and high-resolution displays is called telepresence. Today the principles, if not the precise mechanisms, of a videophone are employed by many users worldwide in the form of webcam videocalls using personal computers, with inexpensive webcams, microphones, and free video calling Web client programs. Thus an activity that was disappointing as a separate service has found a niche as a minor feature in software products intended for other purposes. A study conducted by Pew Research in 2010 revealed that 7% of Americans have made a mobile video call. In the United States, videoconferencing has allowed testimony to be used for an individual who is unable or prefers not to attend the physical legal setting, or who would be subjected to severe psychological stress in doing so; however, there is controversy over the use of testimony by foreign or unavailable witnesses via video transmission, regarding the violation of the Confrontation Clause of the Sixth Amendment of the U.S. Constitution. Videoconferencing may also be associated with a number of technical risks. In a military investigation in North Carolina, Afghan witnesses have testified via videoconferencing. In Hall County, Georgia, videoconferencing systems are used for initial court appearances. The systems link jails with courtrooms, reducing the expenses and security risks of transporting prisoners to the courtroom. The U.S. Social Security Administration (SSA), which oversees the world's largest administrative judicial system under its Office of Disability Adjudication and Review (ODAR), has made extensive use of videoconferencing to conduct hearings at remote locations. In Fiscal Year (FY) 2009, the SSA conducted 86,320 videoconferenced hearings, a 55% increase over FY 2008. In August 2010, the SSA opened its fifth and largest videoconferencing-only National Hearing Center (NHC), in St. Louis, Missouri. This continues the SSA's effort to use video hearings as a means to clear its substantial hearing backlog. Since 2007, the SSA has also established NHCs in Albuquerque, New Mexico; Baltimore, Maryland; Falls Church, Virginia; and Chicago. Videoconferencing has gained widespread popularity within education in recent years, particularly so following the COVID-19 pandemic of early 2020, when much education provision moved online. It provides students with the chance to learn by participating in two-way communication forums. Because it is live, videotelephony allows teachers to access remote or otherwise isolated learners.
Students from diverse communities and backgrounds can come together to learn about one another through practices known as telecollaboration (in foreign language education) and virtual exchange, although language barriers will continue to be present. Such students are able to explore, communicate, analyze, and share information and ideas with one another. Educational institutions have promoted videoconferencing as a way to reduce costs and increase student numbers, with lectures and seminars now often being provided online through videoconferencing technology. Videoconferencing offers educational institutions the possibility of providing courses and education to greater numbers of students, dispersed over larger geographical areas, than can be served from a single bricks-and-mortar location. Through videoconferencing, students can visit other parts of the world, including museums and other cultural and educational sites. Such virtual field trips can provide enriched learning opportunities to students, especially those who are geographically isolated or economically disadvantaged. Small schools can use these technologies to pool resources and provide courses, such as in foreign languages, which could not otherwise be offered. Videoconferencing can provide a number of other benefits to education as well. Videoconferencing is a highly useful technology for real-time telemedicine and telenursing applications, such as diagnosis, consulting, prevention, treatment, and transmission of medical images. With videoconferencing, patients may contact nurses and physicians in emergency or routine situations; physicians and other paramedical professionals can discuss cases across large distances. Rural areas can use this technology for diagnostic purposes, thus saving lives and making more efficient use of health care money. For example, a rural medical center in Ohio used videoconferencing to successfully cut the number of transfers of sick infants to a hospital 70 miles (110 km) away. This had previously cost nearly $10,000 per transfer. Special peripherals such as microscopes fitted with digital cameras, videoendoscopes, medical ultrasound imaging devices, otoscopes, etc., can be used in conjunction with videoconferencing equipment to transmit data about a patient. Recent developments in mobile collaboration on hand-held mobile devices have also extended video-conferencing capabilities to locations previously unreachable, such as a remote community, long-term care facility, or a patient's home. Mayo Clinic uses videoconferencing to enable collaboration among multidisciplinary teams of specialists developing treatment plans for complex cases. The technology links Mayo locations with doctors at hospitals that require Mayo’s expertise and input. Videoconferencing can enable individuals in distant locations to participate in meetings on short notice, with time and money savings. Technology such as VoIP can be used in conjunction with desktop videoconferencing to enable low-cost face-to-face business meetings without leaving the desk, especially for businesses with widespread offices. The technology is also used for remote work. One research report based on a sampling of 1,800 corporate employees showed that, as of June 2010, 54% of the respondents with access to videoconferencing used it "all of the time" or "frequently". Aside from traditional meetings, videoconferencing enables collaborative group sessions in which people collaborate to produce products and services.
Industrial Light & Magic uses videoconferencing as part of a 24-hour global video effects production environment for the film industry. Intel Corporation has used videoconferencing to reduce both the costs and the environmental impacts of its business operations. Videoconferencing is also currently being introduced on online networking websites, in order to help businesses form profitable relationships quickly and efficiently without leaving their place of work. This has been leveraged by banks to connect busy banking professionals with customers in various locations using video banking technology. Videoconferencing on hand-held mobile devices (mobile collaboration technology) is being used in industries such as manufacturing, energy, healthcare, insurance, government, and public safety. Live, visual interaction removes traditional restrictions of distance and time, often in locations previously unreachable, such as a manufacturing plant floor thousands of miles away. In the increasingly globalized film industry, videoconferencing has become useful as a method by which creative talent in many different locations can collaborate closely on the complex details of film production. For example, for the 2013 award-winning animated film Frozen, Burbank-based Walt Disney Animation Studios hired the New York City-based husband-and-wife songwriting team of Robert Lopez and Kristen Anderson-Lopez to write the songs, which required two-hour-long transcontinental videoconferences nearly every weekday for about 14 months. With the development of lower-cost endpoints, the integration of video cameras into personal computers and mobile devices, and software applications such as FaceTime, Skype, Teams, BlueJeans and Zoom, videoconferencing has changed from just a business-to-business offering to include business-to-consumer (and consumer-to-consumer) use. Although videoconferencing has frequently proven its value, research has shown that some non-managerial employees prefer not to use it due to several factors, including anxiety. Some such anxieties can often be avoided if managers use the technology as part of the normal course of business. Remote workers can also adopt certain behaviors and best practices to stay connected with their co-workers and company.[better source needed] Researchers also find that attendees of business and medical videoconferences must work harder to interpret information delivered during a conference than they would if they attended face-to-face. They recommend that those coordinating videoconferences make adjustments to their conferencing procedures and equipment. The concept of press videoconferencing was developed in October 2007 by the PanAfrican Press Association (APPA), a non-governmental organization based in Paris, France, to allow African journalists to participate in international press conferences on developmental and good governance issues. Press videoconferencing permits international press conferences via videoconferencing over the Internet. Journalists can participate in an international press conference from any location, without leaving their offices or countries. They need only be seated by a computer connected to the Internet in order to ask their questions. In 2004, the International Monetary Fund introduced the Online Media Briefing Center, a password-protected site available only to professional journalists. The site enables the IMF to present press briefings globally and facilitates direct questions to briefers from the press.
The site has been copied by other international organizations since its inception. More than 4,000 journalists worldwide are currently registered with the IMF. One of the first demonstrations of the ability for telecommunications to help sign language users communicate with each other occurred when AT&T's videophone (trademarked as the Picturephone) was introduced to the public at the 1964 New York World's Fair—two deaf users were able to communicate freely with each other between the fair and another city. Various universities and other organizations, including British Telecom's Martlesham facility, have also conducted extensive research on signing via video telephony. The use of sign language via videotelephony was hampered for many years due to the difficulty of its use over slow analog copper phone lines, coupled with the high cost of better quality ISDN (data) phone lines. Those factors largely disappeared with the introduction of more efficient and powerful video codecs and the advent of lower-cost high-speed ISDN data and IP (Internet) services in the 1990s. Significant improvements in video call quality of service for the deaf occurred in the United States in 2003 when Sorenson Media Inc. (formerly Sorenson Vision Inc.), a video compression software coding company, developed its VP-100 model stand-alone videophone specifically for the deaf community. It was designed to output its video to the user's television in order to lower the cost of acquisition and to offer remote control and a powerful video compression codec for unequaled video quality and ease of use with video relay services. Favorable reviews quickly led to its popular usage at educational facilities for the deaf, and from there to the greater deaf community. Coupled with similar high-quality videophones introduced by other electronics manufacturers, the availability of high-speed Internet, and sponsored video relay services authorized by the U.S. Federal Communications Commission in 2002, VRS services for the deaf underwent rapid growth in that country. Using such video equipment in the present day, the deaf, hard-of-hearing, and speech-impaired can communicate between themselves and with hearing individuals using sign language. The United States and several other countries compensate companies to provide video relay services (VRS). Telecommunication equipment can be used to talk to others via a sign language interpreter, who uses a conventional telephone at the same time to communicate with the deaf person's party. Video equipment is also used to do on-site sign language translation via Video Remote Interpreting (VRI). The relatively low cost and widespread availability of 3G mobile phone technology with video calling capabilities have given deaf and speech-impaired users a greater ability to communicate with the same ease as others. Some wireless operators have even started free sign language gateways. Sign language interpretation services via VRS or by VRI are useful in the present day where one of the parties is deaf, hard-of-hearing, or speech-impaired (mute). In such cases the interpretation flow is normally within the same principal language, such as French Sign Language (LSF) to spoken French, Spanish Sign Language (LSE) to spoken Spanish, British Sign Language (BSL) to spoken English, and American Sign Language (ASL) also to spoken English (since BSL and ASL are completely distinct from each other), German Sign Language (DGS) to spoken German, and so on. 
Multilingual sign language interpreters, who can also translate across principal languages (for example, a deaf person using ASL calling to reserve a room at a hotel in the Dominican Republic whose staff speak only Spanish; the interpreter must then work among ASL, spoken Spanish, and spoken English to facilitate the call), are also available, albeit less frequently. Such activities involve considerable mental processing efforts on the part of the translator, since sign languages are distinct natural languages with their own construction, semantics and syntax, different from the aural version of the same principal language. With video interpreting, sign language interpreters work remotely with live video and audio feeds, so that the interpreter can see the deaf or mute party, and converse with the hearing party, and vice versa. Much like telephone interpreting, video interpreting can be used for situations in which no on-site interpreters are available. However, video interpreting cannot be used for situations in which all parties are speaking via telephone alone. VRS and VRI interpretation requires all parties to have the necessary equipment. Some advanced equipment enables interpreters to control the video camera remotely, in order to zoom in and out or to point the camera toward the party that is signing. Descriptive names and terminology The name videophone never became as standardized as its earlier counterpart telephone, resulting in a variety of names and terms being used worldwide, and even within the same region or country. Videophones are also known as video phones, videotelephones (or video telephones) and often by an early trademarked name Picturephone, which was the world's first commercial videophone produced in volume. The compound name videophone slowly entered into general use after 1950, although video telephone likely entered the lexicon earlier after video was coined in 1935. Videophone calls (also: videocalls or video chat), as well as Skype and Skyping in verb form, differ from videoconferencing in that they are expected to serve individuals, not groups. However, that distinction has become increasingly blurred with technology improvements such as increased bandwidth and sophisticated software clients that can allow for multiple parties on a call. In general everyday usage, the term videoconferencing is now frequently used instead of videocall for point-to-point calls between two units. Both videophone calls and videoconferencing are also now commonly referred to as a video link. A videoconference system generally costs more than a videophone and provides greater capabilities. A videoconference (also known as a videoteleconference) allows two or more locations to communicate via live, simultaneous two-way video and audio transmissions. This is often accomplished by the use of a multipoint control unit (a centralized distribution and call management system) or by a similar non-centralized multipoint capability embedded in each videoconferencing unit. Again, technology improvements have circumvented traditional definitions by allowing multiple-party videoconferencing via web-based applications.
A telepresence system is a high-end videoconferencing system and service usually employed by enterprise-level corporate offices. Telepresence conference rooms use state-of-the-art room designs, video cameras, displays, sound systems and processors, coupled with high-to-very-high capacity bandwidth transmissions. Typical uses of the various technologies described above include calling one-to-one, or conferencing one-to-many or many-to-many, for personal, business, educational, deaf Video Relay Service and tele-medical, diagnostic and rehabilitative purposes. New services, such as personal videocalls to inmates incarcerated in penitentiaries and videoconferencing to resolve airline engineering issues at maintenance facilities, are being created or evolving on an ongoing basis. Other names for videophone that have been used in English are: Viewphone (the British Telecom equivalent to AT&T's Picturephone), and visiophone, a common French translation that has also crept into limited English usage, as well as over twenty less common names and expressions. Latin-based translations of videophone in other languages include vidéophone (French), Bildtelefon (German), videotelefono (Italian), both videófono and videoteléfono (Spanish), both beeldtelefoon and videofoon (Dutch), and videofonía (Catalan). A telepresence robot (also telerobotics) is a robotically controlled and motorized videoconferencing display to help give a better sense of remote physical presence for communication and collaboration in an office, home, school, etc. when one cannot be there in person. The robotic avatar device can move about and look around at the command of the remote person it represents. Popular culture In science fiction literature, names commonly associated with videophones include telephonoscope, telephote, viewphone, vidphone, vidfone, and visiphone. The first example was probably the cartoon "Edison's Telephonoscope", drawn by George du Maurier for Punch in 1878. In "In the Year 2889", published in 1889, the French author Jules Verne predicted: "The transmission of speech is an old story; the transmission of images by means of sensitive mirrors connected by wires is a thing but of yesterday." Early examples in Anglophone literature using the word videotelephone include A. E. van Vogt's The World of Null-A and the stories of Harl Vincent from the late 1920s. In many science fiction movies and TV programs that are set in the future, videophones were used as a primary method of communication. One of the first movies in which a videophone was used was Fritz Lang's Metropolis (1927). Other notable examples of videophones in popular culture include an iconic scene from the 1968 film 2001: A Space Odyssey set on Space Station V. The movie was released shortly before AT&T began its efforts to commercialize its Picturephone Mod II service in several cities and depicts a video call to Earth using an advanced AT&T videophone—which it predicts will cost $1.70 for a two-minute call in 2001 (a fraction of the company's real rates on Earth in 1968). Film director Stanley Kubrick strove for scientific accuracy, relying on interviews with scientists and engineers at Bell Labs in the United States. Dr. Larry Rabiner of Bell Labs, discussing videophone research in the documentary 2001: The Making of a Myth, stated that in the mid-to-late 1960s videophones "... captured the imagination of the public and ... of Mr. Kubrick and the people who reported to him". In one scene in 2001, a central character, Dr. Heywood Floyd, calls home to contact his family, a social feature noted in the Making of a Myth.
Floyd talks with and views his daughter from a space station in orbit above the Earth, discussing what type of present he should bring home for her. Other early examples of videophones in popular culture include the Warner Bros. cartoon Plane Daffy (1944), in which the spy Hatta Mari uses a videophone to communicate with Adolf Hitler, and the comic strip character Dick Tracy, who often used his "2-way wrist TV" to communicate with police headquarters (1964–1977). By the early 2010s videotelephony and videophones had become commonplace and unremarkable in various forms of media, in part due to their real and ubiquitous presence in common electronic devices and laptop computers. Additionally, TV programming increasingly used videophones to interview subjects of interest and to present live coverage by news correspondents, via the Internet or by satellite links. In the mass market media, the popular U.S. TV talk show hostess Oprah Winfrey incorporated videotelephony into her TV program on a regular basis from May 21, 2009, with an initial episode called Where the Skype Are You?, as part of a marketing agreement with the Internet telecommunication company Skype. See also Notes Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Life_on_Titan] | [TOKENS: 3426] |
Contents Life on Titan Whether there is life on Titan, the largest moon of Saturn, is currently an open question and a topic of scientific assessment and research. Titan is far colder than Earth, but of all the places in the Solar System, Titan is the only place besides Earth known to have liquids in the form of rivers, lakes, and seas on its surface. Its thick atmosphere is chemically active and rich in carbon compounds. On the surface there are small and large bodies of both liquid methane and ethane, and it is likely that there is a layer of liquid water under its ice shell. Some scientists speculate that these liquid mixes may provide prebiotic chemistry for living cells different from those on Earth. In June 2010, scientists analyzing data from the Cassini–Huygens mission reported anomalies in the atmosphere near the surface which could be consistent with the presence of methane-producing organisms, but may alternatively be due to non-living chemical or meteorological processes. The Cassini–Huygens mission was not equipped to look directly for micro-organisms or to provide a thorough inventory of complex organic compounds. Chemistry Titan's consideration as an environment for the study of prebiotic chemistry or potentially exotic life stems in large part from the diversity of the organic chemistry that occurs in its atmosphere, driven by photochemical reactions in its outer layers. A variety of chemicals have been detected in Titan's upper atmosphere by Cassini's mass spectrometer. As mass spectrometry identifies the atomic mass of a compound but not its structure, additional research is required to identify the exact compound that has been detected. Other compounds believed to be indicated by the data and associated models include ammonia, polyynes, amines, ethylenimine, deuterium hydride, allene, 1,3-butadiene and any number of more complex chemicals in lower concentrations, as well as carbon dioxide and limited quantities of water vapour. Surface temperature Due to its distance from the Sun, Titan is much colder than Earth. Its surface temperature is about 94 K (−179 °C, or −290 °F). At these temperatures, water ice—if present—does not melt, evaporate or sublimate, but remains solid. Because of the extreme cold and also because of the lack of carbon dioxide (CO2) in the atmosphere, scientists such as Jonathan Lunine have viewed Titan less as a likely habitat for extraterrestrial life than as an experiment for examining hypotheses on the conditions that prevailed prior to the appearance of life on Earth. Even though the usual surface temperature on Titan is not compatible with liquid water, calculations by Lunine and others suggest that meteor strikes could create occasional "impact oases"—craters in which liquid water might persist for hundreds of years or longer, which would enable water-based organic chemistry. However, Lunine does not rule out life in an environment of liquid methane and ethane, and has written about what discovery of such a life form (even if very primitive) would imply about the prevalence of life in the universe. In the 1970s, astronomers found unexpectedly high levels of infrared emissions from Titan. One possible explanation for this was that the surface was warmer than expected, due to a greenhouse effect.
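Returning to the mass-spectrometry caveat in the Chemistry section above: different molecules can share the same nominal mass, so a single mass peak does not pin down a unique compound. Below is a minimal sketch using approximate integer atomic masses; the example species are standard choices for a nitrogen/methane atmosphere, not a claim about specific Cassini detections.

```python
# Why a mass peak alone cannot identify a compound: different molecules
# can share the same nominal (integer) mass. Integer atomic masses are
# approximations used purely for illustration.
MASSES = {"H": 1, "C": 12, "N": 14, "O": 16}

def nominal_mass(formula: dict) -> int:
    """Sum integer atomic masses for a composition like {"C": 2, "H": 4}."""
    return sum(MASSES[element] * count for element, count in formula.items())

# Three species plausible in a nitrogen/methane atmosphere, all at m/z 28:
for name, formula in [("N2", {"N": 2}),
                      ("CO", {"C": 1, "O": 1}),
                      ("C2H4 (ethylene)", {"C": 2, "H": 4})]:
    print(name, nominal_mass(formula))  # each prints 28
```

Distinguishing such isobaric species requires structural information beyond a single mass measurement, which is the follow-up research the text refers to.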
Some estimates of the surface temperature even approached temperatures in the cooler regions of Earth. There was, however, another possible explanation for the infrared emissions: Titan's surface was very cold, but the upper atmosphere was heated due to absorption of ultraviolet light by molecules such as ethane, ethylene and acetylene. In September 1979, Pioneer 11, the first space probe to conduct fly-by observations of Saturn and its moons, sent data showing Titan's surface to be extremely cold by Earth standards, and far below the temperatures generally associated with planetary habitability. Titan may become warmer in the future. Five to six billion years from now, as the Sun becomes a red giant, surface temperatures could rise to ~200 K (−70 °C), high enough for stable oceans of a water–ammonia mixture to exist on its surface. As the Sun's ultraviolet output decreases, the haze in Titan's upper atmosphere will be depleted, lessening the anti-greenhouse effect on its surface and enabling the greenhouse effect created by atmospheric methane to play a far greater role. These conditions together could create an environment agreeable to exotic forms of life and would persist for several hundred million years, a span that was sufficient for simple life to evolve on Earth, although the presence of ammonia on Titan could cause the same chemical reactions to proceed more slowly. Absence of surface liquid water The lack of liquid water on Titan's surface was cited by NASA astrobiologist Andrew Pohorille in 2009 as an argument against life there. Pohorille considers that water is important not only as the solvent used by "the only life we know" but also because its chemical properties are "uniquely suited to promote self-organization of organic matter". He has questioned whether prospects for finding life on Titan's surface are sufficient to justify the expense of a mission that would look for it. Possible subsurface liquid water Laboratory simulations have led to the suggestion that enough organic material exists on Titan to start a chemical evolution analogous to what is thought to have started life on Earth. While the analogy assumes the presence of liquid water for longer periods than is currently observable, several hypotheses suggest that liquid water from an impact could be preserved under a frozen isolation layer. It has also been proposed that ammonia oceans could exist deep below the surface; one model suggests an ammonia–water solution as much as 200 km deep beneath a water ice crust, conditions that, "while extreme by terrestrial standards, are such that life could indeed survive". Heat transfer between the interior and upper layers would be critical in sustaining any sub-surface oceanic life. Detection of microbial life on Titan would depend on its biogenic effects. For example, the atmospheric methane and nitrogen could be examined for biogenic origin. Data published in 2012, obtained from NASA's Cassini spacecraft, have strengthened evidence that Titan likely harbors a layer of liquid water under its ice shell. Formation of complex molecules Titan is the only known natural satellite (moon) in the Solar System that has a fully developed atmosphere that consists of more than trace gases. Titan's atmosphere is thick, chemically active, and is known to be rich in organic compounds; this has led to speculation about whether chemical precursors of life may have been generated there.
The atmosphere also contains hydrogen gas, which cycles through the atmosphere and the surface environment, and which living things comparable to Earth methanogens could combine with some of the organic compounds (such as acetylene) to obtain energy. The Miller–Urey experiment and several following experiments have shown that with an atmosphere similar to that of Titan and the addition of UV radiation, complex molecules and polymer substances like tholins can be generated. The reaction starts with dissociation of nitrogen and methane, forming hydrogen cyanide and acetylene. Further reactions have been studied extensively. In October 2010, Sarah Hörst of the University of Arizona reported finding the five nucleotide bases—building blocks of DNA and RNA—among the many compounds produced when energy was applied to a combination of gases like those in Titan's atmosphere. Hörst also found amino acids, the building blocks of protein. She said it was the first time nucleotide bases and amino acids had been found in such an experiment without liquid water being present. In April 2013, NASA reported that complex organic chemicals could arise on Titan based on studies simulating the atmosphere of Titan. In June 2013, polycyclic aromatic hydrocarbons (PAHs) were detected in the upper atmosphere of Titan. A team of researchers led by Martin Rahm suggested in 2016 that polyimine could readily function as a building block in Titan's conditions. Titan's atmosphere produces significant quantities of hydrogen cyanide, which readily polymerizes into forms that can capture light energy in Titan's surface conditions. As yet, what happens to Titan's cyanide is unknown; while it is rich in the upper atmosphere where it is created, it is depleted at the surface, suggesting that there is some sort of reaction consuming it. In July 2017, Cassini scientists positively identified the presence of carbon chain anions in Titan's upper atmosphere which appeared to be involved in the production of large complex organics. These highly reactive molecules were previously known to contribute to building complex organics in the interstellar medium, highlighting a possibly universal stepping stone to producing complex organic material. Also in July 2017, scientists reported that acrylonitrile (C2H3CN), a chemical possibly relevant to life through its potential role in cell membrane and vesicle structure formation, had been found on Titan. In October 2018, researchers reported low-temperature chemical pathways from simple organic compounds to complex polycyclic aromatic hydrocarbon (PAH) chemicals. Such chemical pathways may help explain the presence of PAHs in the low-temperature atmosphere of Titan, and may be significant pathways, in terms of the PAH world hypothesis, in producing precursors to biochemicals related to life as we know it. Hypotheses Although all living things on Earth (including methanogens) use liquid water as a solvent, it is conceivable that life on Titan might instead use a liquid hydrocarbon, such as methane or ethane. Water is a stronger solvent than hydrocarbons; however, water is more chemically reactive, and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the risk of its biomolecules being destroyed in this way. Titan appears to have lakes of liquid ethane or liquid methane on its surface, as well as rivers and seas, which some scientific models suggest could support hypothetical non-water-based life.
It has been speculated that life could exist in the liquid methane and ethane that form rivers and lakes on Titan's surface, just as organisms on Earth live in water. Such hypothetical creatures would take in H2 in place of O2, react it with acetylene instead of glucose, and produce methane instead of carbon dioxide. By comparison, some methanogens on Earth obtain energy by reacting hydrogen with carbon dioxide, producing methane and water. In 2005, astrobiologists Christopher McKay and Heather Smith predicted that if methanogenic life were consuming atmospheric hydrogen in sufficient volume, it would have a measurable effect on the mixing ratio in the troposphere of Titan. The effects predicted included a level of acetylene much lower than otherwise expected, as well as a reduction in the concentration of hydrogen itself. Evidence consistent with these predictions was reported in June 2010 by Darrell Strobel of Johns Hopkins University, who analysed measurements of hydrogen concentration in the upper and lower atmosphere. Strobel found that the hydrogen concentration in the upper atmosphere is so much larger than near the surface that the physics of diffusion leads to hydrogen flowing downwards at a rate of roughly 10^25 molecules per second. Near the surface the downward-flowing hydrogen apparently disappears. Another paper released the same month showed very low levels of acetylene on Titan's surface. Chris McKay agreed with Strobel that the presence of life, as suggested in McKay's 2005 article, is a possible explanation for the findings about hydrogen and acetylene, but also cautioned that other explanations are currently more likely: namely the possibility that the results are due to human error, to a meteorological process, or to the presence of some mineral catalyst enabling hydrogen and acetylene to react chemically. He noted that such a catalyst, one effective at −178 °C (95 K), is presently unknown and would in itself be a startling discovery, though less startling than discovery of an extraterrestrial life form. The June 2010 findings gave rise to considerable media interest, including a report in the British newspaper the Telegraph, which spoke of clues to the existence of "primitive aliens". A hypothetical cell membrane capable of functioning in liquid methane was modeled in February 2015. The proposed chemical base for these membranes is acrylonitrile, which has been detected on Titan. Called an "azotosome" ("nitrogen body", from azote, the French word for nitrogen, and soma, Greek for body), it lacks the phosphorus and oxygen found in phospholipids on Earth but contains nitrogen. Despite the very different chemical structure and external environment, its predicted properties are surprisingly similar to those of phospholipid membranes, including self-assembly into sheets, flexibility, and stability. According to computer simulations in 2020 by the Rahm group, azotosomes could not form under the conditions in Titan lakes due to thermodynamic barriers. In 2025, a new mechanism to overcome these barriers was proposed by astrobiologists Christian Mayer and Conor Nixon based on interaction between small mist droplets and the surface of methane lakes. At present, azotosome formation remains speculative, without laboratory demonstration of their existence. An analysis of ALMA data, completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere.
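Written as balanced reactions, the metabolic comparison above looks as follows. The first equation is the acetylene-plus-hydrogen metabolism hypothesized for Titan in the McKay and Smith discussion; the second is the standard reaction of Earth's hydrogenotrophic methanogens. The stoichiometry itself is ordinary chemistry; attributing the first reaction to Titan life remains, as the text stresses, speculative.

```latex
% Hypothetical Titan metabolism: hydrogen consumed with acetylene,
% producing methane (per the hypothesis described above):
\[ \mathrm{C_2H_2} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{CH_4} \]
% Terrestrial comparison: hydrogenotrophic methanogens react hydrogen
% with carbon dioxide, producing methane and water:
\[ \mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O} \]
```

Both reactions consume hydrogen, which is why a measurable drawdown of H2 (and, on Titan, of acetylene) near the surface was proposed as an observable signature.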
In order to assess the likelihood of finding any sort of life on various planets and moons, Dirk Schulze-Makuch and other scientists have developed a planetary habitability index which takes into account factors including characteristics of the surface and atmosphere, availability of energy, solvents and organic compounds. Based on data available in late 2011, this index suggests that Titan has the highest current habitability rating of any known world other than Earth. While the Cassini–Huygens mission was not equipped to provide evidence for biosignatures or complex organics, it showed an environment on Titan that is similar, in some ways, to ones theorized for the primordial Earth. Scientists think that the atmosphere of early Earth was similar in composition to the current atmosphere on Titan, with the important exception of a lack of water vapor on Titan. Many hypotheses have been developed that attempt to bridge the step from chemical to biological evolution. Titan is presented as a test case for the relation between chemical reactivity and life in a 2007 report on life's limiting conditions prepared by a committee of scientists under the United States National Research Council. The committee, chaired by John Baross, considered that "if life is an intrinsic property of chemical reactivity, life should exist on Titan. Indeed, for life not to exist on Titan, we would have to argue that life is not an intrinsic property of the reactivity of carbon-containing molecules under conditions where they are stable..." David Grinspoon, one of the scientists who in 2005 proposed that hypothetical organisms on Titan might use hydrogen and acetylene as an energy source, has mentioned the Gaia hypothesis in the context of discussion about Titan life. He suggests that, just as Earth's environment and its organisms have evolved together, the same thing is likely to have happened on other worlds with life on them. In Grinspoon's view, worlds that are "geologically and meteorologically alive are much more likely to be biologically alive as well". An alternate explanation for life's hypothetical existence on Titan has been proposed: if life were to be found on Titan, it could have originated from Earth in a process called panspermia. It is theorized that large asteroid and cometary impacts on Earth's surface have caused hundreds of millions of fragments of microbe-laden rock to escape Earth's gravity. Calculations indicate that a number of these would encounter many of the bodies in the Solar System, including Titan. On the other hand, Jonathan Lunine has argued that any living things in Titan's cryogenic hydrocarbon lakes would need to be so different chemically from Earth life that it would not be possible for one to be the ancestor of the other. In Lunine's view, the presence of organisms in Titan's lakes would mean a second, independent origin of life within the Solar System, implying that life has a high probability of emerging on habitable worlds throughout the cosmos. Planned and proposed missions The proposed Titan Mare Explorer mission, a Discovery-class lander that would splash down in a lake, "would have the possibility of detecting life", according to astronomer Chris Impey of the University of Arizona. The planned Dragonfly rotorcraft mission is intended to land on solid ground and relocate many times. Dragonfly will be New Frontiers program Mission #4. Its instruments will study how far prebiotic chemistry may have progressed.
Dragonfly will carry equipment to study the chemical composition of Titan's surface, and to sample the lower atmosphere for possible biosignatures, including hydrogen concentrations. See also References |
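As a rough sketch of how an index like the planetary habitability index mentioned earlier can rank worlds, the snippet below combines several habitability factors, each scored from 0 to 1, into a geometric mean so that any near-zero factor drags the overall score down. The four factor names follow the categories listed in the text; the functional form and the numeric values are illustrative assumptions, not the published index definition or its ratings.

```python
import math

def habitability_index(substrate: float, energy: float,
                       chemistry: float, solvent: float) -> float:
    """Toy habitability index: the geometric mean of four factors,
    each scored from 0 to 1. A zero in any essential factor forces
    the overall score to zero."""
    factors = [substrate, energy, chemistry, solvent]
    return math.prod(factors) ** (1.0 / len(factors))

# Illustrative guesses only: a Titan-like world scores on surface
# liquids and rich organic chemistry despite scarce energy; an
# airless, dry world is eliminated by its missing solvent.
print(f"Titan-like world:  {habitability_index(0.8, 0.4, 0.7, 0.6):.2f}")
print(f"Airless dry world: {habitability_index(0.4, 0.5, 0.2, 0.0):.2f}")
```

The geometric-mean design choice matches the intuition in the text: habitability requires all ingredients at once, so strengths in one category cannot fully compensate for the absence of another.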
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Martian_surface] | [TOKENS: 1671] |
Contents Martian surface The study of surface characteristics (or surface properties and processes) is a broad category of Mars science that examines the nature of the materials making up the Martian surface. The study evolved from telescopic and remote-sensing techniques developed by astronomers to study planetary surfaces. However, it has increasingly become a subdiscipline of geology as automated spacecraft bring ever-improving resolution and instrument capabilities. By using characteristics such as color, albedo, and thermal inertia and analytical tools such as reflectance spectroscopy and radar, scientists are able to study the chemistry and physical makeup (e.g., grain sizes, surface roughness, and rock abundances) of the Martian surface. The resulting data help scientists understand the planet's mineral composition and the nature of geological processes operating on the surface. Mars's surface layer represents a tiny fraction of the total volume of the planet, yet plays a significant role in the planet's geologic history. Understanding physical surface properties is also very important in determining safe landing sites for spacecraft. Albedo and color Like all planets, Mars reflects a portion of the light it receives from the Sun. The fraction of sunlight reflected is a quantity called albedo, which ranges from 0 for a body that reflects no sunlight to 1.0 for a body that reflects all sunlight. Different parts of a planet's surface (and atmosphere) have different albedo values depending on the chemical and physical nature of the surface. No topography is visible on Mars from Earth-based telescopes. The bright areas and dark markings on pre-spaceflight-era maps of Mars are all albedo features. (See Classical albedo features on Mars.) They have little relation to topography. Dark markings are most distinct in a broad belt from 0° to 40° S latitude. However, the most prominent dark marking, Syrtis Major Planum, is in the northern hemisphere, outside this belt. The classical albedo feature Mare Acidalium (Acidalia Planitia) is another prominent dark area that lies north of the main belt. Bright areas, excluding the polar caps and transient clouds, include Hellas, Tharsis, and Arabia Terra. The bright areas are now known to be locations where fine dust covers the surface. The dark markings represent areas that the wind has swept clean of dust, leaving behind a lag of dark, rocky material. The dark color is consistent with the presence of mafic rocks, such as basalt. The albedo of a surface usually varies with the wavelength of light hitting it. Mars reflects little light at the blue end of the spectrum but much more at red and longer wavelengths. This is why Mars has the familiar reddish-orange color to the naked eye. But detailed observations reveal a subtle range of colors on Mars's surface. Color variations provide clues to the composition of surface materials. The bright areas are reddish-ochre in color, and the dark areas appear dark gray. A third type of area, intermediate in color and albedo, is also present and thought to represent regions containing a mixture of the material from the bright and dark areas. The dark gray areas can be further subdivided into those that are more reddish and those less reddish in hue. Reflectance spectroscopy Reflectance spectroscopy is a technique that measures the amount of sunlight absorbed or reflected by the Martian surface at specific wavelengths.
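The paragraph that follows describes "deconvolving" an observed spectrum into contributions from individual minerals. One common way to sketch that step is linear unmixing with a non-negativity constraint, as below; the two library spectra and the observation are invented numbers for illustration, not laboratory data.

```python
import numpy as np
from scipy.optimize import nnls

# Toy version of spectral deconvolution: model an observed reflectance
# spectrum as a non-negative mixture of laboratory end-member spectra.
pyroxene     = np.array([0.30, 0.28, 0.20, 0.15, 0.22])  # 5 channels
ferric_oxide = np.array([0.10, 0.25, 0.45, 0.50, 0.48])
library = np.column_stack([pyroxene, ferric_oxide])

# Synthetic "observation": 60% pyroxene + 40% ferric oxide plus an offset.
observed = 0.6 * pyroxene + 0.4 * ferric_oxide + 0.003

# Non-negative least squares keeps abundances physically meaningful (>= 0).
abundances, residual_norm = nnls(library, observed)
print(dict(zip(["pyroxene", "ferric_oxide"], abundances.round(2))))
```

Real analyses must also remove solar and atmospheric absorption lines and handle nonlinear mixing, but the non-negative fit captures the core idea of matching an observation against laboratory spectra.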
The spectra represent mixtures of spectra from individual minerals on the surface along with contributions from absorption lines in the solar spectrum and the Martian atmosphere. By separating out ("deconvolving") each of these contributions, scientists can compare the resulting spectra to laboratory spectra of known minerals to determine the probable identity and abundance of individual minerals on the surface. Using this technique, scientists have long known that the bright ochre areas probably contain abundant ferric iron (Fe3+) oxides typical of weathered iron-bearing materials (e.g., rust). Spectra of the dark areas are consistent with the presence of ferrous iron (Fe2+) in mafic minerals and show absorption bands suggestive of pyroxene, a group of minerals that is very common in basalt. Spectra of the redder dark areas are consistent with mafic materials covered with thin alteration coatings. Thermal inertia Thermal inertia measurement is a remote-sensing technique that allows scientists to distinguish fine-grained from coarse-grained areas on the Martian surface. Thermal inertia is a measure of how fast or slow something heats up or cools off. For example, a thin aluminum cookie sheet taken out of an oven is cool to the touch in less than a minute, while a thick ceramic plate taken from the same oven takes much longer to cool off. Scientists can estimate the thermal inertia on the Martian surface by measuring variations in surface temperature with respect to time of day and fitting these data to numerical temperature models. The thermal inertia of a material is directly related to its thermal conductivity, density, and specific heat capacity (see the sketch below). Rocky materials do not vary much in density and specific heat, so variations in thermal inertia are mainly due to variations in thermal conductivity. Solid rock surfaces, such as outcroppings, have high thermal conductivities and inertias. Dust and small granular material in the regolith have low thermal inertias because the void spaces between grains restrict thermal conductivity to the contact points between grains. Thermal inertia values for most of the Martian surface are inversely related to albedo. Thus, high albedo areas have low thermal inertias, indicating surfaces that are covered with dust and other fine granular material. The dark gray, low albedo surfaces have high thermal inertias more typical of consolidated rock. However, thermal inertia values are not high enough to indicate that widespread outcroppings are common on Mars. Even the rockier areas appear to be mixed with a significant amount of loose material. Data from the Infrared Thermal Mapping (IRTM) experiment on the Viking orbiters identified areas of high thermal inertia throughout the interior of Valles Marineris and the chaotic terrain, suggesting that these areas contain a relatively large number of blocks and boulders. Radar investigations Radar studies provide a wealth of data on elevations, slopes, textures, and material properties of the Martian surface. Mars is an inviting target for Earth-based radar investigations because of its relative proximity to Earth and its favorable orbital and rotational characteristics that allow good coverage over wide areas of the planet's surface. Radar echoes from Mars were first obtained in the early 1960s, and the technique has been vital in finding safe terrain for Mars landers.
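A minimal sketch of the thermal-inertia relationship referenced above: thermal inertia is conventionally the square root of the product of thermal conductivity, density, and specific heat capacity. The material values below are order-of-magnitude illustrations, not mission measurements.

```python
import math

def thermal_inertia(k: float, rho: float, c: float) -> float:
    """I = sqrt(k * rho * c), in SI units (J m^-2 K^-1 s^-0.5):
    k in W m^-1 K^-1, rho in kg m^-3, c in J kg^-1 K^-1."""
    return math.sqrt(k * rho * c)

# Order-of-magnitude illustrative values (not mission data). As the
# surrounding text explains, conductivity dominates the contrast.
rock = thermal_inertia(k=2.0, rho=2900.0, c=800.0)    # solid basalt outcrop
dust = thermal_inertia(k=0.005, rho=1300.0, c=800.0)  # fine, loose regolith
print(f"rock: ~{rock:.0f}  dust: ~{dust:.0f}")        # rock >> dust
```

Because density and specific heat differ little between rocky materials, the factor-of-hundreds spread in conductivity between solid rock and porous dust produces the large thermal-inertia contrast that orbiters map from day-night temperature swings.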
Dispersion of the returned radar echoes from Mars shows that surface roughness and slope vary considerably across the planet's surface. Wide areas of the planet, particularly in Syria and Sinai Plana, are relatively smooth and flat. Meridiani Planum, the landing site of the Mars Exploration Rover Opportunity, is one of the flattest and smoothest (at decimeter scale) locations ever investigated by radar, a fact borne out by surface images at the landing site. Other areas show high levels of roughness in radar that are not discernible in images taken from orbit. The average surface abundance of centimeter- to meter-scale rocks is much greater on Mars than on the other terrestrial planets. Tharsis and Elysium, in particular, show a high degree of small-scale surface roughness associated with volcanoes. This extremely rough terrain is suggestive of young ʻaʻā lava flows. A 200-km-long band of low to zero radar albedo (the "stealth" region) cuts across southwest Tharsis. The region corresponds to the location of the Medusae Fossae Formation, which consists of thick layers of unconsolidated materials, perhaps volcanic ash or loess. Ground-penetrating radar instruments on the Mars Express orbiter (MARSIS) and the Mars Reconnaissance Orbiter (SHARAD) are currently providing stunning echo-return data on subsurface materials and structures to depths of up to 5 km. Results have shown that the polar layered deposits are composed of almost pure ice with no more than 10% dust by volume, and that fretted valleys in Deuteronilus Mensae contain thick glaciers covered by a mantle of rocky debris. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Birthday#cite_note-44] | [TOKENS: 4101] |
Contents Birthday A birthday is the anniversary of the birth of a person or the figurative birth of an institution. Birthdays of people are celebrated in numerous cultures, often with birthday gifts, birthday cards, a birthday party, or a rite of passage. Many religions celebrate the birth of their founders or religious figures with special holidays (e.g. Christmas, Mawlid, Buddha's Birthday, Krishna Janmashtami, and Gurpurb). There is a distinction between birthday and birthdate (also known as date of birth): the former, except for February 29, occurs each year (e.g. January 15), while the latter is the complete date when a person was born (e.g. January 15, 2001). Coming of age In most legal systems, one becomes a legal adult on a particular birthday when they reach the age of majority (usually between 12 and 21), and reaching age-specific milestones confers particular rights and responsibilities. At certain ages, one may become eligible to leave full-time education, become subject to military conscription or to enlist in the military, to consent to sexual intercourse, to marry with parental consent, to marry without parental consent, to vote, to run for elected office, to legally purchase (or consume) alcohol and tobacco products, to purchase lottery tickets, or to obtain a driver's licence. The age of majority is when minors cease to legally be considered children and assume control over their persons, actions, and decisions, thereby terminating the legal control and responsibilities of their parents or guardians over and for them. Most countries set the age of majority at 18, though it varies by jurisdiction. Many cultures celebrate a coming-of-age birthday when a person reaches a particular year of life. Some cultures celebrate landmark birthdays in early life or old age. In many cultures and jurisdictions, if a person's real birthday is unknown (for example, if they are an orphan), their birthday may be adopted or assigned to a specific day of the year, such as January 1. Racehorses are reckoned to become one year old in the year following their birth, on January 1 in the Northern Hemisphere and August 1 in the Southern Hemisphere. Birthday parties In certain parts of the world, an individual's birthday is celebrated by a party featuring a specially made cake. Presents appropriate to the celebrant's age are bestowed on them by the guests. Other birthday activities may include entertainment (sometimes by a hired professional, e.g., a clown, magician, or musician) and a special toast or speech by the birthday celebrant. The last stanza of Patty Hill's and Mildred Hill's famous song, "Good Morning to You" (unofficially titled "Happy Birthday to You"), is typically sung by the guests at some point in the proceedings. In some countries, a piñata takes the place of a cake. The birthday cake may be decorated with lettering and the person's age, or studded with the same number of lit candles as the age of the individual. The celebrated individual may make a silent wish and attempt to blow out the candles in one breath; if successful, superstition holds that the wish will be granted. In many cultures, the wish must be kept secret or it will not "come true". Birthdays as holidays Historically significant people's birthdays, such as those of national heroes or founders, are often commemorated by an official holiday marking the anniversary of their birth.
Some notables, particularly monarchs, have an official birthday on a fixed day of the year, which may not necessarily match the day of their birth, but on which celebrations are held. In Mahayana Buddhism, many monasteries celebrate the anniversary of Buddha's birth, usually in a highly formal, ritualized manner. They treat the Buddha's statue as if it were the Buddha himself, alive, bathing and "feeding" him. Jesus Christ's traditional birthday is celebrated as Christmas Eve or Christmas Day around the world, on December 24 or 25, respectively. As some Eastern churches use the Julian calendar, their December 25 falls on January 7 in the Gregorian calendar. These dates are traditional and have no connection with Jesus's actual birthday, which is not recorded in the Gospels. Similarly, the birthdays of the Virgin Mary and John the Baptist are liturgically celebrated on September 8 and June 24, especially in the Roman Catholic and Eastern Orthodox traditions (although for those Eastern Orthodox churches using the Julian calendar the corresponding Gregorian dates are September 21 and July 7 respectively). As with Christmas, the dates of these celebrations are traditional and probably have no connection with the actual birthdays of these individuals. Catholic saints are remembered by a liturgical feast on the anniversary of their "birth" into heaven, that is, their day of death. In Hinduism, Ganesh Chaturthi is a festival celebrating the birth of the elephant-headed deity Ganesha, observed in extensive community celebrations and at home. Figurines of Ganesha are made for the holiday and are widely sold. Sikhs celebrate the anniversary of the birth of Guru Nanak and other Sikh gurus, which is known as Gurpurb. Mawlid is the anniversary of the birth of Muhammad and is celebrated on the 12th or 17th day of Rabi' al-awwal by adherents of Sunni and Shia Islam respectively. These are the two most commonly accepted dates of birth of Muhammad. However, there is much controversy regarding the permissibility of celebrating Mawlid, as some Muslims judge the custom to be an unacceptable practice according to Islamic tradition. In Iran, Mother's Day is celebrated on the birthday of Fatima al-Zahra, the daughter of Muhammad. Banners reading Ya Fatima ("O Fatima") are displayed on government buildings, private buildings, public streets and car windows. Religious views In Judaism, rabbis are divided about this custom, although the majority of the faithful accept it. In the Torah, the only mention of a birthday is the celebration of Pharaoh's birthday in Egypt (Genesis 40:20). Although the birthday of Jesus of Nazareth is celebrated as a Christian holiday on December 25, historically the celebrating of an individual person's birthday has been subject to theological debate. Early Christians, notes The World Book Encyclopedia, "considered the celebration of anyone's birth to be a pagan custom." Origen, in his homilies on Leviticus, wrote that Christians should not only refrain from celebrating their birthdays but should look at them with disgust, as a pagan custom. A saint's day was typically celebrated on the anniversary of their martyrdom or death, considered the occasion of or preparation for their entrance into Heaven or the New Jerusalem.
Ordinary folk in the Middle Ages celebrated their saint's day (the saint they were named after), but nobility celebrated the anniversary of their birth. The "Squire's Tale", one of Chaucer's Canterbury Tales, opens as King Cambuskan proclaims a feast to celebrate his birthday. In the Modern era, the Catholic Church, the Eastern Orthodox Church and Protestantism, i.e. the three main branches of Christianity, as well as almost all Christian religious denominations, consider celebrating birthdays acceptable, or at most a matter of individual choice. An exception is Jehovah's Witnesses, who do not celebrate them for various reasons: in their interpretation this feast has pagan origins, was not celebrated by early Christians, is negatively expounded in the Holy Scriptures and has customs linked to superstition and magic. In some historically Roman Catholic and Eastern Orthodox countries,[a] it is common to have a 'name day', otherwise known as a 'Saint's day'. It is celebrated in much the same way as a birthday, but it is held on the official day of a saint with the same Christian name as the birthday person; the difference is that one may look up a person's name day in a calendar, or easily remember common name days (for example, John or Mary). In pious traditions, the two were often made to coincide by giving a newborn the name of a saint celebrated on the day of its christening, or, more seldom, on its birthday. Some were given the name of the religious feast of their christening day or birthday, for example, Noel or Pascal (French for Christmas and "of Easter"); as another example, Togliatti was given Palmiro as his first name because he was born on Palm Sunday. Birthday celebration does not reflect Islamic tradition, and because of this the majority of Muslims refrain from it. Others do not object, as long as it is not accompanied by behavior contrary to Islamic tradition. A good portion of Muslims (and Arab Christians) who have emigrated to the United States and Europe celebrate birthdays as customary, especially for children, while others abstain. Hindus celebrate the birth anniversary each year on the day that falls in the same lunar or solar month (Sun Signs Nirayana System – Sourava Mana Masa) as the birth and carries the same asterism (star/nakshatra) as the date of birth. That age is reckoned whenever the Janma Nakshatra of the same month passes. Hindus regard death as more auspicious than birth, since the person is liberated from the bondages of material society. Also, traditionally, rituals and prayers for the departed are observed on the 5th and 11th days, with many relatives gathering. Historical and cultural perspectives According to Herodotus (5th century BC), of all the days in the year, the one which the Persians celebrate most is their birthday. It was customary to have the board furnished on that day with an ampler supply than common: the richer people served an ox, a horse, a camel, or a donkey (Greek: ὄνον) baked whole, while the poorer classes used instead the smaller kinds of cattle. On his birthday, the king anointed his head and presented gifts to the Persians. According to the law of the Royal Supper, on that day "no one should be refused a request". The rule for drinking was "No restrictions". In ancient Rome, a birthday (dies natalis) was originally an act of religious cultivation (cultus).
A dies natalis was celebrated annually for a temple on the day of its founding, and the term is still used sometimes for the anniversary of an institution such as a university. The temple founding day might become the "birthday" of the deity housed there. March 1, for example, was celebrated as the birthday of the god Mars. Each human likewise had a natal divinity, the guardian spirit called the Genius, or sometimes the Juno for a woman, who was owed religious devotion on the day of birth, usually in the household shrine (lararium). The decoration of a lararium often shows the Genius in the role of the person carrying out the rites. A person marked their birthday with ritual acts that might include lighting an altar, saying prayers, making vows (vota), anointing and wreathing a statue of the Genius, or sacrificing to a patron deity. Incense, cakes, and wine were common offerings. Celebrating someone else's birthday was a way to show affection, friendship, or respect. In exile, the poet Ovid, though alone, celebrated not only his own birthday rite but that of his far distant wife. Birthday parties affirmed social as well as sacred ties. One of the Vindolanda tablets is an invitation to a birthday party from the wife of one Roman officer to the wife of another. Books were a popular birthday gift, sometimes handcrafted as a luxury edition or composed especially for the person honored. Birthday poems are a minor but distinctive genre of Latin literature. The banquets, libations, and offerings or gifts that were a regular part of most Roman religious observances thus became part of birthday celebrations for individuals. A highly esteemed person would continue to be celebrated on their birthday after death, in addition to the several holidays on the Roman calendar for commemorating the dead collectively. Birthday commemoration was considered so important that money was often bequeathed to a social organization to fund an annual banquet in the deceased's honor. The observance of a patron's birthday or the honoring of a political figure's Genius was one of the religious foundations for imperial cult or so-called "emperor worship." The Chinese word for "year(s) old" (t 歲, s 岁, suì) is entirely different from the usual word for "year(s)" (年, nián), reflecting the former importance of Chinese astrology and the belief that one's fate was bound to the stars imagined to be in opposition to the planet Jupiter at the time of one's birth. The importance of this duodecennial orbital cycle only survives in popular culture as the 12 animals of the Chinese zodiac, which change each Chinese New Year and may be used as a theme for some gifts or decorations. Because of the importance attached to the influence of these stars in ancient China and throughout the Sinosphere, East Asian age reckoning previously began with one at birth and then added years at each Chinese New Year, so that it formed a record of the suì one had lived through rather than of the exact amount of time from one's birth. This method—which can differ by as much as two years of age from other systems—is increasingly uncommon and is not used for official purposes in the PRC or on Taiwan, although the word suì is still used for describing age. Traditionally, Chinese birthdays—when celebrated—were reckoned using the lunisolar calendar, which varies from the Gregorian calendar by as much as a month forward or backward depending on the year. 
Celebrating the lunisolar birthday remains common on Taiwan while growing increasingly uncommon on the mainland. Birthday traditions reflect the culture's deep-seated focus on longevity and wordplay. From the homophony in some dialects between 酒 ("rice wine") and 久 (meaning "long" in the sense of time passing), osmanthus and other rice wines are traditional gifts for birthdays in China. Longevity noodles are another traditional food consumed on the day, although western-style birthday cakes are increasingly common among urban Chinese. Hongbaos—red envelopes stuffed with money, now especially the red 100 RMB notes—are the usual gift from relatives and close family friends for most children. Gifts for adults on their birthdays are much less common, although the birthday for each decade is a larger occasion that might prompt a large dinner and celebration. The Japanese reckoned their birthdays by the Chinese system until the Meiji Reforms. Celebrations remained uncommon or muted until after the American occupation that followed World War II. Children's birthday parties are the most important, typically celebrated with a cake, candles, and singing. Adults often just celebrate with their partner. In North Korea, the Day of the Sun, Kim Il Sung's birthday, is the most important public holiday of the country, and Kim Jong Il's birthday is celebrated as the Day of the Shining Star. North Koreans are not permitted to celebrate birthdays on July 8 and December 17 because these were the dates of the deaths of Kim Il Sung and Kim Jong Il, respectively. More than 100,000 North Koreans celebrate displaced birthdays on July 9 and December 18 instead to avoid these dates. A person born on July 8 before 1994 may change their birthday, with official recognition. South Korea was one of the last countries to use a form of East Asian age reckoning for many official purposes. Prior to June 2023, three systems were used together—"Korean ages" that start with 1 at birth and increase every January 1st with the Gregorian New Year, "year ages" that start with 0 at birth and otherwise increase the same way, and "actual ages" that start with 0 at birth and increase each birthday (the counting rules are compared in the sketch below). First-birthday celebrations were particularly lavish, despite usually having little to do with the child's age. In June 2023, all Korean ages were set back by at least one year, and official ages henceforth are reckoned only by birthdays. In Ghana, children wake up on their birthday to a special treat called oto, which is a patty made from mashed sweet potato and eggs fried in palm oil. Later they have a birthday party where they usually eat stew and rice and a dish known as kelewele, which is fried plantain chunks. Distribution through the year Birthdays are fairly evenly distributed throughout the year, with some seasonal effects. In the United States, there tend to be more births in September and October. This may be because there is a holiday season nine months before (the human gestation period is about nine months), or because the longest nights of the year also occur in the Northern Hemisphere nine months before. However, the holidays affect birth rates more than the winter: New Zealand, a Southern Hemisphere country, has the same September and October peak with no corresponding peak in March and April. The least common birthdays tend to fall around public holidays, such as Christmas, New Year's Day and fixed-date holidays such as Independence Day in the US, which falls on July 4.
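The three Korean age-reckoning systems described above differ only in their starting value and the day on which age increments, which makes them easy to contrast in code. This is a minimal sketch of the counting rules, not a legal reference; the example dates are illustrative.

```python
from datetime import date

def korean_age(birth: date, on: date) -> int:
    """Traditional "Korean age": 1 at birth, +1 every January 1."""
    return on.year - birth.year + 1

def year_age(birth: date, on: date) -> int:
    """"Year age": 0 at birth, +1 every January 1."""
    return on.year - birth.year

def actual_age(birth: date, on: date) -> int:
    """International age: 0 at birth, +1 on each birthday."""
    before_birthday = (on.month, on.day) < (birth.month, birth.day)
    return on.year - birth.year - (1 if before_birthday else 0)

# A child born on New Year's Eve is already "2" the next morning in
# Korean age, while being one day old internationally.
b, d = date(1990, 12, 31), date(1991, 1, 1)
print(korean_age(b, d), year_age(b, d), actual_age(b, d))  # 2 1 0
```

The up-to-two-year gap between the first and last systems is exactly the discrepancy the June 2023 standardization removed for official purposes.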
Between 1973 and 1999, September 16 was the most common birthday in the United States, and December 25 was the least common birthday (other than February 29, because of leap years). In 2011, October 5 and 6 were reported as the most frequently occurring birthdays. New Zealand's most common birthday is September 29, and the least common birthday is December 25. The ten most common birthdays all fall within a thirteen-day period, between September 22 and October 4. The ten least common birthdays (other than February 29) are December 24–27, January 1–2, February 6, March 22, April 1, and April 25. This is based on all live births registered in New Zealand between 1980 and 2017. Positive and negative associations with culturally significant dates may influence birth rates. One study shows a 5.3% decrease in spontaneous births and a 16.9% decrease in Caesarean births on Halloween, compared to dates occurring within one week before and one week after the October holiday. In contrast, on Valentine's Day, there is a 3.6% increase in spontaneous births and a 12.1% increase in Caesarean births. In Sweden, 9.3% of the population is born in March and 7.3% in November, when a uniform distribution would give 8.3%. In the Gregorian calendar (a common solar calendar), February in a leap year has 29 days instead of the usual 28, so the year lasts 366 days instead of the usual 365. A person born on February 29 may be called a "leapling" or a "leaper". In common years, they usually celebrate their birthdays on February 28. In some situations, March 1 is used as the birthday in a non-leap year, since it is the day following February 28. Technically, a leapling will have fewer birthday anniversaries than their age in years. This phenomenon is exploited when a person claims to be only a quarter of their actual age, by counting their leap-year birthday anniversaries only; the arithmetic is illustrated in the sketch below. In Gilbert and Sullivan's 1879 comic opera The Pirates of Penzance, Frederic the pirate apprentice discovers that he is bound to serve the pirates until his 21st birthday rather than until his 21st year. For legal purposes, a person's legal birthday depends on how local laws count time intervals. An individual's Beddian birthday, named in tribute to firefighter Bobby Beddia, occurs in the year when their age matches the last two digits of the year they were born. Some studies show people are more likely to die on their birthdays, with explanations including excessive drinking, suicide, cardiovascular events due to high stress or happiness, efforts to postpone death for major social events, and death certificate paperwork errors. See also References Notes External links |
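The leapling arithmetic above follows directly from counting actual February 29ths. A minimal sketch:

```python
import calendar

def leapling_anniversaries(birth_year: int, through_year: int) -> int:
    """Count the February 29ths a person born on Feb 29 of birth_year
    has seen, from the year after birth through through_year."""
    return sum(1 for y in range(birth_year + 1, through_year + 1)
               if calendar.isleap(y))

# A leapling born on Feb 29, 1996 has had only 7 true birthday
# anniversaries by 2025, despite being 29 years old: roughly a
# quarter of their age in years, as noted above.
print(leapling_anniversaries(1996, 2025))  # -> 7
```

Note that calendar.isleap applies the full Gregorian rule, so century years like 1900 (not leap) and 2000 (leap) are handled correctly.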
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/White_Wolf,_Inc.] | [TOKENS: 2176] |
Contents White Wolf Publishing White Wolf Entertainment AB, formerly White Wolf Publishing, is a roleplaying game and book publisher founded in the United States. The company was founded in 1991 as a merger between Lion Rampant and White Wolf Magazine (est. 1986 in Rocky Face, Georgia; it later became "White Wolf Inphobia"), and was initially led by Mark Rein-Hagen of the former and Steve Wieck and Stewart Wieck of the latter. White Wolf Publishing, Inc. merged with CCP Games in 2006. White Wolf Publishing operated as an imprint of CCP hf, but ceased in-house production of any material, instead licensing its properties to other publishers. It was announced in October 2015 that White Wolf had been acquired from CCP by Paradox Interactive. In November 2018, after most of its staff were dismissed for making controversial statements, it was announced that White Wolf would no longer function as an entity separate from Paradox Interactive. In May 2025, Jason Carl, Brand Marketing Manager at White Wolf, announced the company's return as the official licensing and publishing entity for all World of Darkness transmedia properties. The name "White Wolf" originates from Michael Moorcock's works. Overview White Wolf published a line of several different but overlapping games set in the "World of Darkness", a "modern gothic" world that, while seemingly similar to the real world, is home to supernatural terrors, ancient conspiracies, and several approaching apocalypses. The company also published the high fantasy Exalted RPG, the modern mythic Scion, and d20 system material under its Sword & Sorcery imprint, including such titles as the Dungeons & Dragons gothic horror campaign setting Ravenloft and Monte Cook's Arcana Unearthed series. To complement the World of Darkness game line, the company published a LARP system dubbed Mind's Eye Theatre. White Wolf also released several series of novels based on the Old World of Darkness, all of which are currently out of print (although many are coming back into availability via print-on-demand). White Wolf also ventured into the collectible card game market with Arcadia, Rage, and Vampire: The Eternal Struggle (formerly Jyhad). V:TES, perhaps the most successful of the three, was originally published by Wizards of the Coast in 1994, but was abandoned just two years later, after a revamped base set, a name change and three expansions had been published. White Wolf acquired the rights to the game in 2000, by which time no new material had been produced for the game in over four years. Since then, several V:TES expansions have been released, and the game was the only official source of material for the Old World of Darkness until 2011, when the 20th Anniversary Edition of Vampire: The Masquerade was published and the Onyx Path was announced. Video games such as Vampire: The Masquerade – Redemption and Vampire: The Masquerade – Bloodlines are based on White Wolf's role-playing game Vampire: The Masquerade. There are also several Hunter: The Reckoning video games. On Saturday, 11 November 2006, White Wolf and CCP Games, the Icelandic MMO development company responsible for EVE Online, announced a merger between the two companies during the keynote address at the EVE Online Fanfest 2006. It was also revealed that a World of Darkness MMORPG was already in the planning stages. This game was cancelled in April 2014 after nine years of development.
At GenCon 2012 it was announced that CCP Games/White Wolf would not continue to produce table-top RPGs. Onyx Path Publishing, a new company founded by White Wolf Creative Director Richard Thomas, purchased the Trinity games and Scion from CCP and became licensee for the production of World of Darkness titles (classic and new), as well as Exalted. Onyx Path does not hold a license to the Mind's Eye Theatre titles. On Thursday, 29 October 2015, Paradox Interactive and CCP announced that Paradox had purchased White Wolf and all of its intellectual properties. Tobias Sjögren would serve as the CEO of the revived company, which would remain a subsidiary of Paradox. Martin Ericsson, formerly a developer on the World of Darkness MMO, served in the "Lead Storyteller" role for the company. In November 2018, as a result of backlash generated by material pertaining to the "murder of gay Chechens" published in a Vampire: The Masquerade Fifth Edition sourcebook, as well as the inclusion of optional neo-Nazi aesthetics in the Brujah vampire clan, it was announced that White Wolf would no longer function as an entity separate from its parent company, and would cease developing and publishing products internally. In May 2025, Paradox Interactive announced that it was rebranding its publisher World of Darkness, renaming it White Wolf. The new White Wolf would develop tabletop RPGs, as well as co-publish the video game Vampire: The Masquerade – Bloodlines 2 with Paradox Interactive. RPG products The games of the original World of Darkness series use White Wolf's Storyteller System. Several games inspired spinoffs in the form of historical period settings such as the Dark Ages. In addition to those game lines a series of books was produced under the title World of Darkness. These provided stand-alone materials for multiple game lines with the focus on a specific region or theme, e.g. WoD: Blood-Dimmed Tides (about the oceans), WoD: Combat (an alternative 'crossover' combat system to resolve contradictory mechanics and add some sophistication), WoD: Tokyo, Hong Kong, Midnight Circus, and WoD: Mafia. For the Third Edition of Ars Magica, White Wolf connected that game's pseudohistorical setting to the future World of Darkness setting. This was a simple adjustment (since the core premise of both settings is 'Earth as we know it' + 'supernatural fiction is reality') and particularly suited to the 'Tremere connection' between a clan of vampires from the original Vampire and a House of magi in the Order of Hermes (the central organization of Ars Magica as well as one of the 'Traditions' in Mage: The Ascension). The games of the Chronicles of Darkness series use White Wolf's newer Storytelling System. For over a decade this line was also known as "World of Darkness," causing it to be referred to as the "new World of Darkness" or nWoD to distinguish it from the prior line of games. In December 2015, it was renamed "Chronicles of Darkness" by its new publisher Onyx Path in order to more clearly distinguish the two, given Paradox Interactive's intention to reboot the original setting. Trinity was originally named Æon, but a trademark issue with Viacom related to the MTV show Æon Flux resulted in a name change. Mind's Eye Theatre (LARP) The majority of the Old World of Darkness games were adapted into the original Mind's Eye Theatre format for live-action roleplaying. Subsequently, the Mind's Eye Theatre was revamped for the New World of Darkness.
A core Mind's Eye Theatre rulebook was published as the LARP analogue to the World of Darkness core rulebook, with several Mind's Eye Theatre adaptations following suit: The Requiem, The Forsaken, and The Awakening each adapted their respective namesakes to the new system of MET rules. At Midwinter Gaming Convention in 2013, it was announced that, as a result of CCP Games' discontinuation of publishing, By Night Studios had acquired the license to produce all Mind's Eye Theatre titles. In May 2013, By Night Studios launched a successful Kickstarter campaign to rebuild the Mind's Eye Theatre: Vampire The Masquerade product specifically for the Live-Action Role Play audience. The book was published in December 2013. This was followed by Mind's Eye Theatre: Werewolf the Apocalypse in October 2016 and Mind's Eye Theatre: Changeling the Dreaming in 2020. In 2019, By Night Studios released Mind's Eye Theatre: Vampire the Masquerade, Volume II: Issue 1, intended to be the first in a series of releases featuring new character options. Eventually each of these releases would be collected within a full edition of Volume II. This plan was eventually scrapped in favor of releasing the full version of Mind's Eye Theatre: Vampire the Masquerade, Volume 2 in October 2021, featuring the additional new rules that were slated to appear in Issues 2 and 3. After publication of Vampire the Masquerade, Volume 2, work began on Mind's Eye Theatre: Werewolf the Apocalypse, Volume 2, which was written but never scheduled for formal release. In February 2023, the manuscript was released on the By Night Studios website for free. In May 2023, By Night Studios announced a new Laws of the Night, featuring new live action rules based on Vampire: The Masquerade, Fifth Edition. A crowdfunding campaign was launched and successfully raised $111,165 against a target goal of $25,000. The PDF version of the book was released online in September 2023, with physical copies scheduled to ship in 2024. By Night Studios also published Mind's Eye Theatre: Vampire the Masquerade: War of Ages in 2023, which took the setting of Vampire the Masquerade, Fifth Edition and re-imagined the game in the style of Nordic live action role-playing games, focusing less on game mechanics in favor of deeper, more immersive role-playing. Fiction Starting in 1986, White Wolf published fiction in various formats, beginning with its eponymous magazine title (which ran 57 issues), then three comic book titles in 1987, and on to graphic novels, paperbacks and hardcover books. Works included novels and anthologies based on White Wolf's games, as well as general fantasy and horror fiction. White Wolf printed several Elric of Melniboné collections by Michael Moorcock. The company also put out general fiction collections by Harlan Ellison, as well as several paperback editions of the "Borderlands" anthologies edited by Thomas F. Monteleone. White Wolf published books under several imprints, most notably Black Dog Game Factory, which was also a fictional company in the World of Darkness, as detailed in the Subsidiaries: A Guide to Pentex game supplement. Reception White Wolf won the 2004 Silver Ennie Award for "Best Publisher". See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/History_of_the_Jews_in_Myanmar] | [TOKENS: 1480] |
Contents History of the Jews in Myanmar The history of the Jews in Myanmar (formerly Burma) begins primarily in the mid-19th century, when hundreds of Jews emigrated from Iraq during the British colonial period. Cochin Jews came from India and both groups were part of the development of the British Empire, becoming allied with the British in Burma (now Myanmar). At its height in 1940 the community of Jews in the country stood at 2,500 members. During and after the Second World War many Jews left the country, first under pressure from the Japanese occupation of Burma, and later because of repression under the newly independent nationalist Burmese government. One synagogue survives in Yangon, the capital, and in the 21st century, it attracts an increasing number of tourists. History The first recorded Jew in the country was Solomon Gabirol, who served as a commander in the army of King Alaungpaya in the 18th century. But it was in the mid-19th century, during the British colonial period, that Jewish merchants from Iraq and India began establishing sizable communities in Rangoon and Mandalay. The Baghdadi Jews had emigrated from Iraq to escape persecution and pogroms; they comprised most of the immigrants. Cochin Jews and the Bene Israel came from India. Under British rule, the local Jewish community prospered as merchants developed small businesses, and traders worked in cotton and rice. The Baghdadi Jews established Musmeah Yeshua synagogue in Rangoon; it is the only synagogue still standing in Yangon or the country. It was first built in the 1850s as a small wooden structure, then rebuilt in 1896. The Jewish cemetery, containing 700 graves, is about six miles away. A Jewish school, for children up to middle-school age, had 200 students at its peak in 1910. After that some parents sent their children to secular schools, especially as many Jewish men married Burmese or other ethnic women. In some cases, children were sent to India or Great Britain for higher education. Jews were so established in their major communities that Rangoon and Pathein both elected Jewish mayors in the early 20th century. In this community, a number of the men "married out", to Burmese women. The Jewish community also established ties with British colonial officers and businessmen. A second synagogue, Beth El, was opened in 1932, reflecting the growth in population. By 1940 the community numbered its peak of 2,500 persons; most were involved in business and industry, with "some owning ice factories and bottling plants, others dealing in textiles and timber. The rest were primarily customs officials and traders." In the early part of the twentieth century, various minority groups began to work toward establishing some autonomy, including the Karen people, who were indigenous to the territory. The Burmese, whose ancestors had migrated from China in ancient times, were working toward nationalism. With the Japanese invasion in 1942, many Jews fled to India, as their British alliances made the Japanese hostile to them. Though the Japanese were allies of the Nazis, they did not have any particular antipathy towards the Jews. At the same time, they viewed the local Jews with suspicion as a pro-British and "European" group. In the drive for independence, the Burmese majority worked to dispossess and defeat the minority groups and strictly limited their rights in the new government. Burma was the first Asian nation to recognize Israel, and it maintains diplomatic relations with the Jewish state.
Israel opened its first diplomatic mission in Yangon in 1953, and in 1957 it became an embassy. Both nations shared a socialist outlook in their early years and maintained extensive contacts between their respective leaders. Following the nationalization of businesses in 1964, the remaining Jewish community suffered further decline. Beth El closed. Most members moved to other countries. The country's last rabbi left in 1969. Since the late 20th century, some of the Mizo people, who trace their ethnic origins to Tibet and live in the north of Burma, on the Indian border, have identified as Jews. They have taken on the belief that they descend from the lost tribe of Manasseh, based on certain traditions that are similar to those of Judaism. Some have converted to Judaism and immigrated to Israel. Those who settled in Israel have embraced Orthodox Judaism (they had to convert to Orthodoxy to be considered citizens) and have been settled in the West Bank and Gush Katif. They are known as the Bnei Menashe. As of 2002, 20 Jews remained in Yangon, the capital city. Many Burmese Jews have immigrated to Israel over the years since the state's establishment. The local Jews use the Musmeah Yeshua Synagogue, but it rarely draws the required quorum of men for a full religious service. Often, employees of the Israeli embassy help maintain regular services; Moses Samuels, a native-born descendant of Jewish immigrants from Iraq, took on his father's role as trustee of the synagogue, maintaining it along with the cemetery. His son Sammy Samuels was also committed to the future of the synagogue. The senior Samuels has given numerous tours to visitors. In 2011 the congregation had 45 Jews. In 2007 the US-ASEAN Council for Business and Technology, the US-ASEAN Business Council's 501(c)(3) tax-exempt organization, obtained a license from the United States Department of the Treasury's Office of Foreign Assets Control (OFAC) to raise funds for a humanitarian project: the maintenance and restoration of the Musmeah Yeshua Synagogue in Yangon. (The license was needed to operate outside the US economic sanctions against the government of Myanmar because of its human rights abuses; sanctions were lifted in 2012.) The Council planned to provide for the synagogue's monthly expenses; complete restoration and maintenance of the synagogue; and assist the synagogue to purchase and establish a new cemetery. On December 8, 2013, an interfaith event attended by the Myanmar Presidential Minister U Aung Min, US Ambassador Derek Mitchell, Israeli Ambassador Hagay Moshe Behar, the Yangon Religious Council, and other guests celebrated the completion of the restoration and the establishment of the synagogue as self-supporting. They credited anthropologist Ruth Cernea, who wrote a history of the Jewish community in Rangoon; Laura Hudson of the Council; and Stuart Spencer, a member of the synagogue's diaspora, as the three leaders of this project. The Yangon Heritage Trust has installed a blue plaque at the synagogue, marking its historical significance. On July 26, 2020, the MICCI (Myanmar–Israel Chamber of Commerce, Industry and Innovation) was launched in the Musmeah Yeshua Synagogue in Yangon by the Ambassador of Israel, Ronen Gilor, the head of the Jewish community, Sammy Samuels, and the president of the UMFCCI, together with Myanmar and Israeli business leaders. The MICCI was incorporated by the DICA of the Ministry of Investment and Foreign Economic Relations of the Republic of the Union of Myanmar.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lod#cite_note-90] | [TOKENS: 4733] |
Contents Lod Lod (Hebrew: לוד, fully vocalized: לֹד), also known as Lydda (Ancient Greek: Λύδδα) and Lidd (Arabic: اللِّدّ, romanized: al-Lidd, or اللُّدّ, al-Ludd), is a city 15 km (9½ mi) southeast of Tel Aviv and 40 km (25 mi) northwest of Jerusalem in the Central District of Israel. It is situated between the lower Shephelah on the east and the coastal plain on the west. The city had a population of 90,814 in 2023. Lod has been inhabited since at least the Neolithic period. It is mentioned a few times in the Hebrew Bible and in the New Testament. Between the 5th century BCE and the late Roman period, it was a prominent center for Jewish scholarship and trade. Around 200 CE, the city became a Roman colony and was renamed Diospolis (Ancient Greek: Διόσπολις, lit. 'city of Zeus'). Tradition identifies Lod as the 4th-century martyrdom site of Saint George; the Church of Saint George and Mosque of Al-Khadr located in the city is believed to have housed his remains. Following the Arab conquest of the Levant, Lod served as the capital of Jund Filastin; however, a few decades later, the seat of power was transferred to Ramla, and Lod slipped in importance. Under Crusader rule, the city was a Catholic diocese of the Latin Church and it remains a titular see to this day.[citation needed] Lod underwent a major change in its population in the mid-20th century. Exclusively Palestinian Arab in 1947, Lod was part of the area designated for an Arab state in the United Nations Partition Plan for Palestine; however, in July 1948, the city was occupied by the Israel Defense Forces, and most of its Arab inhabitants were expelled in the Palestinian expulsion from Lydda and Ramle. The city was largely resettled by Jewish immigrants, most of them expelled from Arab countries. Today, Lod is one of Israel's mixed cities, with an Arab population of 30%. Lod is one of Israel's major transportation hubs. The main international airport, Ben Gurion Airport, is located 8 km (5 mi) north of the city. The city is also a major railway and road junction. Religious references The Hebrew name Lod appears in the Hebrew Bible as a town of Benjamin, founded along with Ono by Shamed or Shamer (1 Chronicles 8:12; Ezra 2:33; Nehemiah 7:37; 11:35). In Ezra 2:33, it is mentioned as one of the cities whose inhabitants returned after the Babylonian captivity. Lod is not mentioned among the towns allocated to the tribe of Benjamin in Joshua 18:11–28. The name Lod derives from a tri-consonantal root not extant in Northwest Semitic, but only in Arabic ("to quarrel; withhold, hinder"). An Arabic etymology of such an ancient name is unlikely (the earliest attestation is from the Achaemenid period). In the New Testament, the town appears in its Greek form, Lydda, as the site of Peter's healing of Aeneas in Acts 9:32–38. The city is also mentioned in an Islamic hadith as the location of the battlefield where the false messiah (al-Masih ad-Dajjal) will be slain before the Day of Judgment. History The first occupation dates to the Neolithic period and is associated with the Lodian culture. Occupation continued into the Chalcolithic. Pottery finds have dated the initial settlement in the area now occupied by the town to 5600–5250 BCE. In the Early Bronze Age, it was an important settlement in the central coastal plain between the Judean Shephelah and the Mediterranean coast, along Nahal Ayalon. Other important nearby sites were Tel Dalit, Tel Bareqet, Khirbat Abu Hamid (Shoham North), Tel Afeq, Azor and Jaffa.
Two architectural phases belong to the late EB I in Area B. The first phase had a mudbrick wall, while the late phase included a circular stone structure. Later excavations have produced an occupation layer, Stratum IV. It consists of two phases: Stratum IVb, with a mudbrick wall on stone foundations and rounded exterior corners, and Stratum IVa, with a mudbrick wall with no stone foundations, with imported Egyptian pottery and local imitations. Another excavation revealed nine occupation strata. Strata VI–III belonged to Early Bronze IB. The material culture showed Egyptian imports in strata V and IV. Occupation continued into Early Bronze II with four strata (V–II). There was continuity in the material culture and indications of centralized urban planning. North of the tell were scattered MB II burials. The earliest written record is in a list of Canaanite towns drawn up by the Egyptian pharaoh Thutmose III at Karnak in 1465 BCE. From the fifth century BCE until the Roman period, the city was a centre of Jewish scholarship and commerce. According to British historian Martin Gilbert, during the Hasmonean period, Jonathan Maccabee and his brother, Simon Maccabaeus, enlarged the area under Jewish control, which included conquering the city. The Jewish community in Lod during the Mishnah and Talmud era is described in a significant number of sources, including information on its institutions, demographics, and way of life. The city reached its height as a Jewish center between the First Jewish-Roman War and the Bar Kokhba revolt, and again in the days of Judah ha-Nasi and the start of the Amoraim period. The city was then the site of numerous public institutions, including schools, study houses, and synagogues. In 43 BCE, Cassius, the Roman governor of Syria, sold the inhabitants of Lod into slavery, but they were set free two years later by Mark Antony. During the First Jewish–Roman War, the Roman proconsul of Syria, Cestius Gallus, razed the town on his way to Jerusalem in Tishrei 66 CE. According to Josephus, "[he] found the city deserted, for the entire population had gone up to Jerusalem for the Feast of Tabernacles. He killed fifty people whom he found, burned the town and marched on". Lydda was occupied by Emperor Vespasian in 68 CE. In the period following the destruction of Jerusalem in 70 CE, Rabbi Tarfon, who appears in many Tannaitic and Jewish legal discussions, served as a rabbinic authority in Lod. During the Kitos War, 115–117 CE, the Roman army laid siege to Lod, where the rebel Jews had gathered under the leadership of Julian and Pappos. Torah study was outlawed by the Romans and pursued mostly underground. The distress became so great that the patriarch Rabban Gamaliel II, who was shut up there and died soon afterwards, permitted fasting on Ḥanukkah. Other rabbis disagreed with this ruling. Lydda was next taken and many of the Jews were executed; the "slain of Lydda" are often mentioned in words of reverential praise in the Talmud. In 200 CE, Emperor Septimius Severus elevated the town to the status of a city, calling it Colonia Lucia Septimia Severa Diospolis. The name Diospolis ("City of Zeus") may have been bestowed earlier, possibly by Hadrian. At that point, most of its inhabitants were Christian. The earliest known bishop is Aëtius, a friend of Arius. During the following century (200–300 CE), it is said that Joshua ben Levi founded a yeshiva in Lod. In December 415, the Council of Diospolis was held here to try Pelagius; he was acquitted.
In the sixth century, the city was renamed Georgiopolis after St. George, a soldier in the guard of the emperor Diocletian, who was born there between 256 and 285 CE. The Church of Saint George and Mosque of Al-Khadr is named for him. The 6th-century Madaba map shows Lydda as an unwalled city with a cluster of buildings under a black inscription reading "Lod, also Lydea, also Diospolis". An isolated large building with a semicircular colonnaded plaza in front of it might represent the St George shrine. After the Muslim conquest of Palestine by Amr ibn al-'As in 636 CE, Lod, which was referred to as "al-Ludd" in Arabic, served as the capital of Jund Filastin ("Military District of Palaestina") before the seat of power was moved to nearby Ramla during the reign of the Umayyad Caliph Suleiman ibn Abd al-Malik in 715–716. The population of al-Ludd was relocated to Ramla, as well. With the relocation of its inhabitants and the construction of the White Mosque in Ramla, al-Ludd lost its importance and fell into decay. The city was visited by the local Arab geographer al-Muqaddasi in 985, when it was under the Fatimid Caliphate, and was noted for its Great Mosque which served the residents of al-Ludd, Ramla, and the nearby villages. He also wrote of the city's "wonderful church (of St. George) at the gate of which Christ will slay the Antichrist." The Crusaders occupied the city in 1099 and named it St Jorge de Lidde. It was briefly conquered by Saladin, but retaken by the Crusaders in 1191. For the English Crusaders, it was a place of great significance as the birthplace of Saint George. The Crusaders made it the seat of a Latin Church diocese, and it remains a titular see. It owed the service of 10 knights and 20 sergeants, and it had its own burgess court during this era. In 1226, Ayyubid Syrian geographer Yaqut al-Hamawi visited al-Ludd and stated it was part of the Jerusalem District during Ayyubid rule. Sultan Baybars brought Lydda again under Muslim control by 1267–8. According to Qalqashandi, Lydda was an administrative centre of a wilaya during the fourteenth and fifteenth centuries in the Mamluk empire. Mujir al-Din described it as a pleasant village with an active Friday mosque. During this time, Lydda was a station on the postal route between Cairo and Damascus. In 1517, Lydda was incorporated into the Ottoman Empire as part of the Damascus Eyalet, and in the 1550s, the revenues of Lydda were designated for the new waqf of Hasseki Sultan Imaret in Jerusalem, established by Hasseki Hurrem Sultan (Roxelana), the wife of Suleiman the Magnificent. By 1596 Lydda was a part of the nahiya ("subdistrict") of Ramla, which was under the administration of the liwa ("district") of Gaza. It had a population of 241 households and 14 bachelors who were all Muslims, and 233 households who were Christians. They paid a fixed tax rate of 33.3% on agricultural products, including wheat, barley, summer crops, vineyards, fruit trees, sesame, special products ("dawalib" = spinning wheels), goats and beehives, in addition to occasional revenues and market toll, a total of 45,000 akçe. All of the revenue went to the Waqf. In 1051 AH (1641/2 CE), the Bedouin tribe of al-Sawālima from around Jaffa attacked the villages of Subṭāra, Bayt Dajan, al-Sāfiriya, Jindās, Lydda and Yāzūr belonging to Waqf Haseki Sultan. The village appeared as Lydda, though misplaced, on the map of Pierre Jacotin compiled in 1799. Missionary William M.
Thomson visited Lydda in the mid-19th century, describing it as a "flourishing village of some 2,000 inhabitants, imbosomed in noble orchards of olive, fig, pomegranate, mulberry, sycamore, and other trees, surrounded every way by a very fertile neighbourhood. The inhabitants are evidently industrious and thriving, and the whole country between this and Ramleh is fast being filled up with their flourishing orchards. Rarely have I beheld a rural scene more delightful than this presented in early harvest ... It must be seen, heard, and enjoyed to be appreciated." In 1869, the population of Ludd was given as: 55 Catholics, 1,940 "Greeks", 5 Protestants and 4,850 Muslims. In 1870, the Church of Saint George was rebuilt. In 1892, the first railway station in the entire region was established in the city. In the second half of the 19th century, Jewish merchants migrated to the city, but left after the 1921 Jaffa riots. In 1882, the Palestine Exploration Fund's Survey of Western Palestine described Lod as "A small town, standing among enclosure of prickly pear, and having fine olive groves around it, especially to the south. The minaret of the mosque is a very conspicuous object over the whole of the plain. The inhabitants are principally Moslim, though the place is the seat of a Greek bishop resident of Jerusalem. The Crusading church has lately been restored, and is used by the Greeks. Wells are found in the gardens...." From 1918, Lydda was under the administration of the British Mandate in Palestine, as per a League of Nations decree that followed the Great War. During the Second World War, the British set up supply posts in and around Lydda and its railway station, also building an airport that was renamed Ben Gurion Airport after the death of Israel's first prime minister in 1973. At the time of the 1922 census of Palestine, Lydda had a population of 8,103 inhabitants (7,166 Muslims, 926 Christians, and 11 Jews); the Christians were 921 Orthodox, 4 Roman Catholics and 1 Melkite. This had increased by the 1931 census to 11,250 (10,002 Muslims, 1,210 Christians, 28 Jews, and 10 Bahai), in a total of 2,475 residential houses. In 1938, Lydda had a population of 12,750. In 1945, Lydda had a population of 16,780 (14,910 Muslims, 1,840 Christians, 20 Jews and 10 "other"). Until 1948, Lydda was an Arab town with a population of around 20,000—18,500 Muslims and 1,500 Christians. In 1947, the United Nations proposed dividing Mandatory Palestine into two states, one Jewish state and one Arab; Lydda was to form part of the proposed Arab state. In the ensuing war, Israel captured Arab towns outside the area the UN had allotted it, including Lydda. In December 1947, thirteen Jewish passengers in a seven-car convoy to Ben Shemen Youth Village were ambushed and murdered. In a separate incident, three Jewish youths, two men and a woman, were captured, then raped and murdered in a neighbouring village. Their bodies were paraded in Lydda's principal street. The Israel Defense Forces entered Lydda on 11 July 1948. The following day, under the impression that it was under attack, the 3rd Battalion was ordered to shoot anyone "seen on the streets". According to Israel, 250 Arabs were killed. Other estimates are higher: Arab historian Aref al Aref estimated 400, and Nimr al Khatib 1,700. In 1948, the population rose to 50,000 during the Nakba, as Arab refugees fleeing other areas made their way there.
A key event was the Palestinian expulsion from Lydda and Ramle, with the expulsion of 50,000–70,000 Palestinians from Lydda and Ramle by the Israel Defense Forces. All but 700 to 1,056 were expelled by order of the Israeli high command, and forced to walk 17 km (10½ mi) to the Jordanian Arab Legion lines. Estimates of those who died from exhaustion and dehydration vary from a handful to 355. The town was subsequently sacked by the Israeli army. Some scholars, including Ilan Pappé, characterize this as ethnic cleansing. The few hundred Arabs who remained in the city were soon outnumbered by the influx of Jews who immigrated to Lod from August 1948 onward, most of them from Arab countries. As a result, Lod became a predominantly Jewish town. After the establishment of the state, the biblical name Lod was readopted. The Jewish immigrants who settled Lod came in waves, first from Morocco and Tunisia, later from Ethiopia, and then from the former Soviet Union. Since 2008, many urban development projects have been undertaken to improve the image of the city. Upscale neighbourhoods have been built, among them Ganei Ya'ar and Ahisemah, expanding the city to the east. According to a 2010 report in the Economist, a three-meter-high wall was built between Jewish and Arab neighbourhoods, and construction in Jewish areas was given priority over construction in Arab neighbourhoods. The newspaper says that violent crime in the Arab sector revolves mainly around family feuds over turf and honour crimes. In 2010, the Lod Community Foundation organised an event for representatives of bicultural youth movements, volunteer aid organisations, educational start-ups, businessmen, sports organizations, and conservationists working on programmes to better the city. In the 2021 Israel–Palestine crisis, a state of emergency was declared in Lod after Arab rioting led to the death of an Israeli Jew. The Mayor of Lod, Yair Revivio, urged Prime Minister of Israel Benjamin Netanyahu to deploy the Israel Border Police to restore order in the city. This was the first time since 1966 that Israel had declared this kind of emergency lockdown. International media noted that both Jewish and Palestinian mobs were active in Lod, but the "crackdown came for one side" only. Demographics In the 19th century and until the Lydda Death March, Lod was an exclusively Muslim-Christian town, with an estimated 6,850 inhabitants, of whom approximately 2,000 (29%) were Christian. According to the Israel Central Bureau of Statistics (CBS), the population of Lod in 2010 was 69,500 people. According to the 2019 census, the population of Lod was 77,223, of which 53,581 people, comprising 69.4% of the city's population, were classified as "Jews and Others", and 23,642 people, comprising 30.6%, as "Arab". Education According to the CBS, the city has 38 schools and 13,188 pupils: 26 elementary schools with 8,325 elementary school pupils, and 13 high schools with 4,863 high school pupils. About 52.5% of 12th-grade pupils were entitled to a matriculation certificate in 2001.[citation needed] Economy The airport and related industries are a major source of employment for the residents of Lod. Other important factories in the city are the communication equipment company "Talard"; "Cafe-Co", a subsidiary of the Strauss Group; and "Kashev", the computer center of Bank Leumi. A Jewish Agency Absorption Centre is also located in Lod. According to CBS figures for 2000, 23,032 people were salaried workers and 1,405 were self-employed.
The mean monthly wage for a salaried worker was NIS 4,754, a real change of 2.9% over the course of 2000. Salaried men had a mean monthly wage of NIS 5,821 (a real change of 1.4%) versus NIS 3,547 for women (a real change of 4.6%). The mean income for the self-employed was NIS 4,991. About 1,275 people were receiving unemployment benefits and 7,145 were receiving an income supplement. Art and culture In 2009–2010, Dor Guez held Georgeopolis, an exhibit focusing on Lod, at the Petach Tikva art museum. Archaeology A well-preserved mosaic floor dating to the Roman period was excavated in 1996 as part of a salvage dig conducted on behalf of the Israel Antiquities Authority and the Municipality of Lod, prior to widening HeHalutz Street. According to Jacob Fisch, executive director of the Friends of the Israel Antiquities Authority, a worker at the construction site noticed the tail of a tiger and halted work. The mosaic was initially covered over with soil at the conclusion of the excavation for lack of funds to conserve and develop the site. The mosaic is now part of the Lod Mosaic Archaeological Center. The floor, with its colorful display of birds, fish, exotic animals and merchant ships, is believed to have been commissioned by a wealthy resident of the city for his private home. The Lod Community Archaeology Program, which operates in ten Lod schools, five Jewish and five Israeli Arab, combines archaeological studies with participation in digs in Lod. Sports The city's major football club, Hapoel Bnei Lod, plays in Liga Leumit (the second division). Its home is at the Lod Municipal Stadium. The club was formed by a merger of Bnei Lod and Rakevet Lod in the 1980s. Two other clubs in the city play in the regional leagues: Hapoel MS Ortodoxim Lod in Liga Bet and Maccabi Lod in Liga Gimel. Hapoel Lod played in the top division during the 1960s and 1980s, and won the State Cup in 1984. The club folded in 2002. A new club, Hapoel Maxim Lod (named after former mayor Maxim Levy) was established soon after, but folded in 2007.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Fellow] | [TOKENS: 1653] |
Contents Fellow A fellow is a title and form of address for distinguished, learned, or skilled individuals in academia, medicine, research, and industry. The exact meaning of the term differs in each field. In learned or professional societies, the term refers to a privileged member who is specially elected in recognition of their work and achievements. Within institutions of higher education, a fellow is a member of a highly ranked group of teachers at a particular college or university or a member of the governing body in some universities. It can also be a specially selected postgraduate student who has been appointed to a post (called a fellowship) granting a stipend, research facilities and other privileges for a fixed period (usually one year or more) in order to undertake some advanced study or research, often in return for teaching services. In the context of medical education in North America, a fellow (also known as a fellow physician) is a doctor who is undergoing a supervised, sub-specialty medical training (fellowship) after having completed a specialty training program (residency). Lastly, in large, R&D-intensive institutions, the term denotes a small number of senior scientists and engineers. Education and academia In education and academia there are several kinds of fellowships, awarded for different reasons. The title of (senior) teaching fellow is used to denote an academic teaching position at a university or similar institution and is roughly equivalent to the title of (senior) lecturer. The title (senior) fellow can also be bestowed on an academic member of staff upon retirement who continues to be affiliated to a university in the United Kingdom. The term teaching fellow or teaching assistant is used, in the United States and United Kingdom, in secondary school, high school and middle school settings for students or adults that assist a teacher with one or more classes. In US medical institutions, a fellow (also known as a fellow physician) refers to someone who has completed residency training (e.g. in family medicine, internal medicine, pediatrics, general surgery, etc.) and is currently in a 1- to 3-year subspecialty training program (e.g. cardiology, sleep medicine/somnology, pediatric nephrology, transplant surgery, etc.). The title of research fellow may be used to denote an academic position at a university or a similar institution; it is roughly equivalent to the title of lecturer in the Commonwealth teaching career pathway.[citation needed] Research fellow may also refer to the recipient of an academic financial grant or scholarship. For example, in Germany, institutions such as the Alexander von Humboldt Foundation offer research fellowships for postdoctoral research and refer to the holders as research fellows, while the award holder may formally hold a specific academic title at their home institution (e.g., Privatdozent). These are often shortened to the name of the programme or organization, e.g. Dorothy Hodgkin Fellow rather than Dorothy Hodgkin Research Fellow, except where this might cause confusion with another fellowship (e.g. the Royal Society University Research Fellowship). In the context of graduate school in the United States and Canada, a fellow is a recipient of a postgraduate fellowship.
Examples include the NSF Graduate Research Fellowship, the DoD National Defense Science and Engineering Graduate Fellowship, the DOE Computational Science Graduate Fellowship, the Guggenheim Fellowship, the Rosenthal Fellowship, the Frank Knox Memorial Fellowship, the Woodrow Wilson Teaching Fellowship and the Presidential Management Fellowship. It is granted to prospective or current students, on the basis of their academic or research achievements. In the UK, research fellowships are awarded to support postdoctoral researchers, such as those funded by the Wellcome Trust and the Biotechnology and Biological Sciences Research Council (BBSRC). At ETH Zurich, postdoctoral fellowships support incoming researchers. The MacArthur Fellows Program (aka the "genius grant") is a prestigious research fellowship awarded in the United States. Fellowships may involve a short placement for capacity building, e.g., to get more experience in government, such as the American Association for the Advancement of Science's fellowships and the American Academy of Arts and Sciences Fellowship programs. Some institutions offer fellowships as a professional training program as well as a financial grant, such as the Balsillie School of International Affairs, where tuition and other fees are paid by the fellowship. Fellow is often the highest grade of membership of many professional associations or learned societies, for example, the Chartered Institute of Arbitrators, the Chartered Governance Institute, the Royal College of Surgeons, the Institution of Chemical Engineers, or the Royal Society of Chemistry. Lower grades are referred to as members (who typically share voting rights with the fellows), or associates (who may or may not, depending on whether "associate" status is a form of full membership). Additional grades of membership exist in, for example, the Institute of Electrical and Electronics Engineers (IEEE) and the Association for Computing Machinery (ACM). Fellowships of this type can be awarded as a title of honor in their own right, e.g. Fellow of the Royal Society (FRS) or Fellow of the Royal Academy of Engineering (FREng). Exclusive learned societies such as the Royal Society have Fellow as the only grade of membership. Appointment as an honorary fellow in a learned or professional society can be either to honour exceptional achievement or service within the professional domain of the awarding body or to honour contributions related to the domain from someone who is professionally outside it. Membership of the awarding body may or may not be a requirement. How a fellowship is awarded varies for each society. At the ancient universities of Oxford, Cambridge, and Trinity College, Dublin, members of the teaching staff typically have two affiliations: one as a reader, lecturer, or other academic rank within a department of the university, as at other universities, and a second affiliation as a fellow of one of the colleges of the university. The fellows, sometimes referred to as university dons, form the governing body of the college. They may elect a council to handle day-to-day management.
All fellows are entitled to certain privileges within their colleges, which may include dining at High Table (free of charge) and possibly the right to a room in college (free of charge).[citation needed] At Cambridge, retired academics may remain fellows.[citation needed] At Oxford, however, a governing body fellow would normally be elected a fellow emeritus and would leave the governing body upon their retirement.[citation needed] Distinguished old members of the college, or its benefactors and friends, might also be elected 'Honorary Fellow', normally for life; but beyond limited dining rights this is merely an honour. Most Oxford colleges have 'Fellows by Special Election' or 'Supernumerary Fellows', who may be members of the teaching staff, but not necessarily members of the governing body. Some senior administrators of a college such as bursars are made fellows, and thereby members of the governing body, because of their importance to the running of a college.[citation needed] At some universities in the United States, "fellows" are members of the board of trustees who hold administrative positions as non-executive trustees rather than academics.[citation needed] Industry and corporate fellows In industries intensive in science, engineering, medicine, and research & development, companies may appoint a very small number of top senior researchers as corporate, technical or industry fellows, either in science or in engineering. These are internationally recognized leaders who are among the best in the world in their respective fields. Corporate, technical or industry fellow is the most senior rank or title one can achieve in a scientific or engineering career, though fellows often also hold business titles such as Vice President or Chief Technology Officer. The title fellow can also be used for participants in a professional development program run by a nonprofit or governmental organization. This type of fellowship is a short-term work opportunity (1–2 years) for professionals who already possess some level of academic or professional expertise that will serve the nonprofit mission. Fellows are given a stipend as well as professional experience and leadership training.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ultima_(series)] | [TOKENS: 6863] |
Contents Ultima (series) Ultima is a series of open world fantasy role-playing video games from Origin Systems, created by Richard Garriott. Electronic Arts has owned the brand since 1992. The series had sold over 2 million copies by 1997. Alongside Wizardry and Might and Magic, the Ultima series is considered to have established many norms of the computer role-playing game genre. Several games of the series are considered seminal in their genre. Their innovations, particularly in the early installments, were widely copied by other games. The Ultima games are primarily within the scope of fantasy fiction but contain science fiction elements as well. They take place for the most part in a world called Britannia; the recurring hero is initially called the Stranger, until he attains the role of Avatar in Ultima IV and is known by that appellation from then on. Games The main Ultima series consists of nine installments (with the seventh title divided into two parts) grouped into three trilogies, or "Ages": The Age of Darkness (Ultima I–III), The Age of Enlightenment (Ultima IV–VI), and The Age of Armageddon (Ultima VII–IX). The last is also sometimes referred to as "The Guardian Saga" after its chief antagonist. The first trilogy is set in a fantasy world named Sosaria, but during the cataclysmic events of The Age of Darkness, it is sundered and three-quarters of it vanish. What is left becomes known as Britannia, a realm ruled by the benevolent Lord British, and is where the later games mostly take place. The protagonist in all the games is a resident of Earth who is called upon by Lord British to protect Sosaria and, later, Britannia from a number of dangers. Originally, the player character was referred to as "the Stranger", but by the end of Ultima IV he becomes universally known as the Avatar. In Ultima I: The First Age of Darkness (1981), the Stranger is first summoned to Sosaria to defeat the evil wizard Mondain who aims to enslave it. Since Mondain possesses the Gem of Immortality, which makes him invulnerable, the Stranger locates a time machine, travels back in time to kill Mondain before he creates the Gem, and shatters the incomplete artifact. Ultima II: The Revenge of the Enchantress (1982) details Mondain's secret student and lover Minax's attempt to avenge him. When Minax launches an attack on the Stranger's homeworld of Earth, her actions cause doorways to open to various times and locations throughout Earth's history, and bring forth legions of monsters to all of them. The Stranger, after obtaining the Quicksword that alone can harm her, locates the evil sorceress at Castle Shadowguard at the origin of time and defeats her. Ultima III: Exodus (1983) reveals that Mondain and Minax had an offspring, the eponymous Exodus, "neither human, nor machine", according to the later games (it is depicted as a computer at the conclusion of the game, and it appears to be a demonic, self-aware artificial intelligence). Some time after Minax's death, Exodus starts its own attack on Sosaria and the Stranger is summoned once again to destroy it. Exodus was the first installment of the series featuring a player party system, which was used in many later games. Ultima IV: Quest of the Avatar (1985) marked a turning point in the series from the traditional "hero vs. villain" plots, instead introducing a complex alignment system based upon the Eight Virtues derived from the combinations of the Three Principles of Love, Truth and Courage.
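The combinatorial logic of that system can be made concrete with a short sketch. The following Python snippet is illustrative only, not Origin Systems code: it assumes the canonical Ultima IV pairing of each virtue with a subset of the principles (Honesty from Truth alone, Justice from Truth and Love, Spirituality from all three, Humility from none, and so on), and all identifiers are invented for this example.

```python
from itertools import combinations

# Canonical Ultima IV mapping of the Eight Virtues to combinations
# of the Three Principles (an illustrative model, not game source code).
PRINCIPLES = ("Truth", "Love", "Courage")

VIRTUES = {
    "Honesty":      {"Truth"},
    "Compassion":   {"Love"},
    "Valor":        {"Courage"},
    "Justice":      {"Truth", "Love"},
    "Sacrifice":    {"Love", "Courage"},
    "Honor":        {"Truth", "Courage"},
    "Spirituality": {"Truth", "Love", "Courage"},
    "Humility":     set(),  # derived from none of the principles
}

def virtues_from(principles: set) -> list:
    """Return the virtues derived from exactly this set of principles."""
    return [name for name, basis in VIRTUES.items() if basis == principles]

if __name__ == "__main__":
    # The 2**3 = 8 subsets of the principles account for exactly the
    # eight virtues, which is the arithmetic behind the system.
    for size in range(len(PRINCIPLES) + 1):
        for combo in combinations(PRINCIPLES, size):
            print(sorted(combo), "->", virtues_from(set(combo)))
```

Each of the eight subsets of {Truth, Love, Courage} maps to exactly one virtue, which is why the system yields eight virtues rather than some other number.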
Although Britannia now prospers under Lord British's rule, he fears for his subjects' spiritual well-being and summons the Stranger again to become a spiritual leader of the Britannian people by example. Throughout the game, the Stranger's actions determine how close he comes to this ideal. Upon achieving enlightenment in every Virtue, he can reach the Codex of Ultimate Wisdom and become the "Avatar", the embodiment of Britannia's virtues. In Ultima V: Warriors of Destiny (1988), the Avatar returns to Britannia to find that, after Lord British was lost in the Underworld, Lord Blackthorn, who rules in his stead, has been corrupted by the Shadowlords and enforces a radically twisted vision of the Virtues, deviating considerably from their original meaning. The Avatar and his companions proceed to rescue the true king, overthrow the tyrant, and restore the Virtues in their true form. Ultima VI: The False Prophet (1990) details the invasion of Britannia by Gargoyles, which the Avatar and his companions have to repel. Over the course of the game, it is revealed that the Gargoyles have valid reasons to loathe the Avatar. Exploring the themes of racism and xenophobia, the game tasks the Avatar with understanding and reconciling two seemingly opposing cultures. Ultima VII: The Black Gate (1992) sees the Avatar entangled in the plan of an ostensibly virtuous and benevolent organization named the Fellowship (inspired by Scientology) to create a gateway for the evil entity known as the Guardian to enter Britannia. Though all of the main-line Ultima games are arranged into trilogies, Richard Garriott later revealed that Ultima VII was the first game where he did any sort of planning ahead for future games in the series. He elaborated that "the first three didn't have much to do with each other, they were 'Richard Garriott learns to program'; IV through VI were a backwards-designed trilogy, in the sense that I tied them together as I wrote them; but VII-IX, the story of the Guardian, were a preplanned trilogy, and we had a definite idea of where we wanted to go." An expansion pack named Forge of Virtue was released, adding to the map a newly arisen volcanic island that the Avatar was invited to investigate. The tie-in storyline was limited to this island, where a piece of Exodus (its data storage unit) had resurfaced. To leave the island again, the Avatar had to destroy this remnant of Exodus. In the process of doing so, he also created The Black Sword, an immensely powerful weapon possessed by a demon. Ultima VII Part Two: Serpent Isle (1993) was released as the second part of Ultima VII because it used the same game engine as Ultima VII. According to interviews, Richard Garriott felt it therefore did not warrant a new number. Production was rushed due to deadlines imposed on the developers, and the storyline was cut short; remnants of the original, longer storyline can be found in the game's data. Following the Fellowship's defeat, its founder Batlin flees to the Serpent Isle, pursued by the Avatar and companions. Serpent Isle is revealed to be another fragment of former Sosaria, and its history, recounted throughout the game, provides many explanations and ties up many loose ends left over from the Age of Darkness era. Magical storms herald the unraveling of the dying world's very fabric, and the game's mood is notably melancholic, including the voluntary sacrificial death of a long-standing companion of the Avatar, Dupre.
By the end of the game, the Avatar is abducted by the Guardian and thrown into another world, which becomes the setting for the next game in the series. The Silver Seed was an expansion pack for Ultima VII Part 2 where the Avatar travels back in time to plant a silver seed, thus balancing the forces that hold the Serpent Isle together. Like Forge of Virtue, the expansion contained an isolated sub-quest that was irrelevant to the main game's storyline, but provided the Avatar with a plethora of useful and powerful artifacts. In Ultima VIII: Pagan (1994), the Avatar finds himself exiled by the Guardian to a world called "Pagan". The Britannic Principles and Virtues are unknown here. Pagan is ruled by the Elemental Titans, god-like servants of the Guardian. The Avatar defeats them with their own magic, ascending to demi-godhood himself, and finally returns to Britannia. A planned expansion pack, The Lost Vale, was canceled after Ultima VIII failed to meet sales expectations. Ultima IX: Ascension (1999), the final installment of the series, sees Britannia conquered and its Virtues corrupted by the Guardian. The Avatar has to cleanse and restore them. The Guardian is revealed to be the evil part of the Avatar himself, expelled from him when he became the Avatar. To stop it, he has to merge with it, destroying himself as a separate entity. An earlier, unreleased version of the plot featured a more apocalyptic ending, with the Guardian and Lord British killed, Britannia destroyed, and the Avatar ascending to a higher plane of existence. Akalabeth: World of Doom was released in 1979, and is sometimes considered a precursor to the Ultima series. Sierra On-Line also produced Ultima: Escape from Mt. Drash in 1983. This maze game has nothing in common with the others, but is highly sought after by collectors due to its extreme rarity. The Worlds of Ultima series is a spin-off of Ultima VI using the same game engine, following the Avatar's adventures after that game's conclusion. The second spin-off series, Ultima Underworld, consisted of two games with a first-person perspective. Console versions of Ultima have brought further exposure to the series, especially in Japan, where the games have been bestsellers and were accompanied by several tie-in products including Ultima cartoons and manga. In most cases, gameplay and graphics have been changed significantly. Ultima Online (1997), an MMORPG spin-off of the main series, became an unexpected hit, making it one of the earliest and longest-running successful MMORPGs of all time. Its lore retconned the ending of Ultima I, stating that when the Stranger shattered the Gem of Immortality, he discovered that it was tied to the world itself, and that its shards each therefore contained a miniature version of Britannia. The player characters in Ultima Online exist on these "shards". Eight expansion packs for UO have been released (The Second Age, Renaissance, Third Dawn, Lord Blackthorn's Revenge, Age of Shadows, Samurai Empire, Mondain's Legacy and Stygian Abyss). The aging UO graphics engine was renewed in 2007 with the official Kingdom Reborn client. Ultima Online 2, later renamed Ultima Worlds Online: Origin and canceled in 2001, would have introduced steampunk elements to the game world, following Lord British's unsuccessful attempt to merge past, present, and future shards together.
UO spawned two sequel efforts that were canceled before release: Ultima Worlds Online: Origin (canceled in 2001, though the game's storyline was published in the Technocrat War trilogy) and Ultima X: Odyssey (canceled in 2004). Ultima X: Odyssey would have continued the story of Ultima IX. Now merged with the Guardian, the Avatar creates the world of Alucinor inside his mind, where the players were supposed to pursue the Eight Virtues in order to strengthen him and weaken the Guardian. Ultima X was developed without the participation of the original creator, Richard Garriott, who no longer owns the rights to the series. However, he still owns the rights to several of the game characters, so it is impossible for either him or Electronic Arts to produce a new Ultima title without getting permission from each other. Lord of Ultima is a defunct free-to-play browser-based MMORTS released in 2010 by EA Phenomic. It was the first release in the Ultima series since Ultima Online, and also the first title to have no involvement from series creator Garriott or founding company Origin. It has been criticized[by whom?] for having slow-paced gameplay and very weak connections to the Ultima franchise lore. EA announced on February 12, 2014, that Lord of Ultima would be shut down, and the game was taken offline on May 12, 2014. Announced in summer 2012, Ultima Forever is a defunct free-to-play online action role-playing game. In contrast to Lord of Ultima, Ultima Forever returned to the lore of the original game series. On August 29, 2014, Ultima Forever's servers were shut down. A group of volunteer programmers created Ultima V: Lazarus in 2005, a remake of Ultima V using the Dungeon Siege engine. The Exult open-source project aims to recreate Ultima VII for modern operating systems, using the game's original plot, data, and graphics files. Several novels were released under the Ultima name. In Japan, various novels, multiple gamebooks, a soundtrack CD, two kinds of wrist watches, a tape dispenser, a pencil holder, a board game, a jacket, and a beach towel were released. There were rumours of an Ultima anime cartoon, but its existence has been described as unlikely. Four main manga comics were released in Japan. Packaging Ultima game boxes often contained so-called "feelies"; e.g. from Ultima II on, every game in the main series came with a cloth map of the game world. Starting with Ultima IV, small trinkets like pendants, coins and magic stones were included. Made of metal or glass, they usually represented an important object found within the game itself. Not liking how games were sold in zip-lock bags with a few pages printed out for instructions, Richard Garriott insisted Ultima II be sold in a box, with a cloth map and a manual. Sierra was the only company at that time willing to agree to this, and thus he signed with them. Copy protection measures In the Atari 8-bit version of Ultima IV, one of the floppy disks had a deliberately unformatted track. If that unformatted track was missing, as on a normally formatted copy, the player would lose every fight, which would not be obvious as a copy-protection effect right away, as one could assume the losses were due to a lack of experience or proper equipment. The protection mechanism was subtle enough to be overlooked by the German distributor, which originally delivered Atari 8-bit packages with fully formatted floppies; these paid copies thus acted like unlicensed copies, causing players to lose every battle.
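The logic of that disk check can be sketched abstractly. The snippet below is a speculative reconstruction in Python, not the original Atari 8-bit routine: read_sector stands in for a hypothetical low-level floppy read that fails on an unformatted track, and the track number is invented for illustration.

```python
PROTECTION_TRACK = 35  # hypothetical location of the deliberately unformatted track

def disk_is_original(read_sector) -> bool:
    """A genuine disk has an unformatted track, so reading it must fail."""
    try:
        read_sector(PROTECTION_TRACK, sector=0)
    except IOError:
        return True   # the expected read error marks an original disk
    return False      # a clean read means a copy tool formatted the track

def resolve_combat(player_roll: int, enemy_roll: int, original: bool) -> str:
    # On a copy, every fight is silently rigged against the player, which is
    # why the failure looked like bad luck rather than copy protection.
    if not original:
        return "player loses"
    return "player wins" if player_roll >= enemy_roll else "player loses"

if __name__ == "__main__":
    def formatted_copy_read(track, sector):
        return b"\x00" * 128  # simulates a copy: every read succeeds

    genuine = disk_is_original(formatted_copy_read)  # False for this copy
    print(resolve_combat(10, 3, original=genuine))   # "player loses" despite good rolls
```

The design inverts the usual expectation: a read error is the proof of authenticity, and the penalty for failing the check is deferred and indirect rather than an explicit refusal to run.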
In Ultima V, there were one or two instances where persons encountered in the game asked for ostensibly insignificant information found only in the accompanying booklet. The game also used runic script in some places and a special language for spell names, and the necessary translation tables and explanations for both were provided in the booklet. Similarly, a journal of Lord British's doomed expedition into the underworld was included with the box; over the course of the game it turns out that the player will have to retrace the expedition's steps to recover a vital item. These can be seen as subtle copy-protection measures, so well fitted to the game's context of history and fantasy that a casual player would not take them for copy protection. Ultima VI introduced a more systematic use of copy protection in the form of in-game questions, preventing the player from progressing any further if the questions were answered incorrectly. In Ultima VII, this practice was continued, although in both games the player had an unlimited number of tries to answer the questions correctly. Answers could be obtained by consulting the manual or cloth map, although the manual released with the Ultima Collection contained all copy protection answers for every game. In Ultima VII Part 2: Serpent Isle, the copy protection was changed slightly. Players were asked questions at two points in the game, and if they could not answer after two attempts, all NPCs said nothing but altered versions of famous quotes. Everything would also be labeled "Oink!", preventing the game from being played. From Ultima VIII onward, copy protection questions were discontinued. Common elements Originally, the world of Ultima was made up of four continents. These were Lord British's Realm, ruled by Lord British and the Lost King; The Lands of Danger and Despair, ruled by Lord Shamino and the King of the White Dragon; The Lands of the Dark Unknown, ruled by Lord Olympus and the King of the Black Dragon; and The Lands of the Feudal Lords, ruled by the lords of Castle Rondorin and Castle Barataria. After the defeat of Mondain and the shattering of his Gem of Immortality in Ultima I, there was a cataclysm that changed the structure of the world. Three of the four continents seemingly disappeared, leaving only Lord British's realm in the world. This remaining continent was referred to from then on as "Sosaria". The Lands of Danger and Despair were later rediscovered as the Serpent Isle, which had been moved to a different dimension or plane, so it seems likely that the other two continents still exist. Ultima II shows Castle Barataria on Planet X, suggesting that the Lands of the Feudal Lords became this planet; Ultima Online: Samurai Empire posits that the Lands of the Feudal Lords were transformed into the Tokuno Islands by the cataclysm. After the defeat of Exodus in Ultima III, Sosaria became Britannia in order to honor its ruler, Lord British. Serpent Isle remained connected with Britannia via a gate in the polar region. The Fellowship leader, Batlin, fled here after the Black Gate was destroyed in Ultima VII, preventing the Guardian's first invasion. In Ultima VII Part 2, ninety percent of the island's population was destroyed by evil Banes that Batlin released in a foolish attempt to capture them for his own use. In Ultima, the player takes the role of the Avatar, who embodies eight virtues.
First introduced in Ultima IV, the Three Principles and the Eight Virtues marked a reinvention of the game focus from a traditional role-playing model into an ethically framed one. Each virtue is associated with a party member, one of Britannia's cities, and one of the eight other planets in Britannia's solar system. Each virtue also has a mantra and each principle a word of power that the player must learn. The Eight Virtues explored in Ultima are Honesty, Compassion, Valor, Justice, Sacrifice, Honor, Spirituality, and Humility. These Eight Virtues are based on the Three Principles of Truth, Love, and Courage. The Principles are derived from the One True Axiom, the combination of all Truth, all Love, and all Courage, which is Infinity. The virtues were first introduced in Ultima IV: Quest of the Avatar (1985), where the goal of the game is to practice them and become a moral exemplar. Virtues and their variations are present in all later installments. Richard Garriott's motives in designing the virtue system were to build on the fact that games were provoking thought in the player, even unintentionally. As a designer, he "wasn't interested in teaching any specific lesson; instead, his next game would be about making people think about the consequences of their actions." The original virtue system in Ultima was partially inspired by the 16 ways of purification (sanskara) and character traits (samskara) which lead to Avatarhood in Hinduism. He also drew on his interpretation of characters from The Wizard of Oz, with the Scarecrow representing truth, the Tin Woodsman representing love, and the Cowardly Lion representing courage. The Virtues have become a frequent theme in the Ultima games following Ultima IV, with many different variants used throughout the series. Ultima V: Warriors of Destiny saw Lord Blackthorn turn the virtue system into a rigid and draconian set of laws. Blackthorn's rigid system unintentionally caused the Virtues to achieve their polar opposites, in part due to the influence of the Shadowlords. This shows that the Virtues always come from one's own self, and that codifying ethics into law does not automatically make evil people good. Ultima VI: The False Prophet confronted the Avatar with the fact that, from another point of view, the Avatar's quests for Virtue may not appear virtuous at all, presenting an alternative set of virtues. In Ultima VII, an order known as the Fellowship displaced the Virtues with its own seemingly benevolent belief system, casting Britannia into disarray; and in Ultima IX, the Virtues had been inverted into their opposite anti-virtues. Ultima's virtue system was considered a new frontier in game design, and has become "an industry standard, especially within role-playing games." The original system from Ultima IV has influenced moral systems in games such as Black & White, Star Wars: Knights of the Old Republic and the Fable series. However, Ultima can only be won by being virtuous, while other games typically offer a choice to be vicious. Mark Hayse specifically praises Ultima's virtue system for its subtlety. The game emphasizes the importance of virtue, but leaves the practice ambiguous with no explicit point values and limited guidance. This makes the virtue system more of a "philosophical journey" than an ordinary game puzzle. The early Ultima games referred to the player-protagonist as the Stranger, with an open game design that allowed players to complete quests through theft or violence.
After the release of Ultima III, creator Richard Garriott received letters from parents who criticized the Ultima series for allowing immoral actions, such as stealing from or murdering peaceful citizens. Garriott also received criticism about supposed Satanic content, particularly the demonic nature of the antagonist of Ultima III, who appeared on the game packaging. In The Official Book of Ultima, Shay Addams described Richard Garriott's thinking, that "if people were going to look for hidden meaning in his work when they didn't even exist, he would introduce ideas and symbols with meaning and significance he deemed worthwhile, to give them something they could really think about." After watching a television show on Hinduism and the concept of the Avatar, Garriott was inspired to create his own system of eight Virtues for the next protagonist in Ultima IV, the Avatar. The Avatar makes his first appearance in the fourth Ultima game, where his goal is to follow the path of the Virtues, and retrieve the Codex of Ultimate Wisdom from the Great Stygian Abyss. In the fifth game, the Avatar defeats a repressive regime in Britannia, and in the sixth, he brings peace between men and gargoyles. In Ultima VII and VIII, the Avatar battles the Guardian, finally destroying both himself and his foe in Ultima IX: Ascension. With the exception of Ultima IX: Ascension, the player can choose the Avatar's name. Ultima VIII: Pagan fixed the Avatar's identity as a blond-haired, blue-eyed male, while the other games allowed the player to select the Avatar's race, gender, and appearance. From Ultima IV onward, the player activated the Avatar's speech using single keywords, until Ultima VII and Ultima Underworld allowed full dialog. Ultima IX added digitized speech to accompany the text. The Avatar was initially designed to be a blank slate through which players could reflect their own personality. This use of the word "avatar" was the first time that the word represented a concept defined by its modern virtual context. The Avatar was one of the first protagonists whose race and gender the player could select, and can be interpreted as a representative of the player, allowing them to reflect on their actions in the game. However, the Avatar eventually evolved to take on a more specific appearance and character. Lord British is the ruler of Britannia, and an in-game personification of the creator of the series, Richard Garriott. His name comes from a nickname given to him by friends at a computer camp, who felt that his way of saying "hello" was distinctly "British." The "Lord" prefix was added when he played the dungeon master in Dungeons & Dragons games. Garriott released early games, such as Akalabeth, under the name and occasionally appeared in Ultima Online playing as Lord British. He is still known as Lord British even after his departure from Ultima maker Origin Systems: Garriott retained the trademark rights to the name Lord British with its associated symbols, and the character appeared in his latest (and now defunct) online game, Tabula Rasa, as General British. Lord Blackthorn becomes regent of Britannia when Lord British disappears while exploring the Underworld in Ultima V. Originally, he is a wise and just ruler, but he is twisted by the Shadowlords and becomes an oppressive tyrant. By the game's conclusion, Lord British is restored to his throne and Blackthorn sent to exile through a red moongate to an unknown world.
Ultima VII Part Two: Serpent Isle explains that his destination is the Serpent Isle. While there, Blackthorn takes refuge among the Xenkan Monks and finds redemption, eventually joining their order. Ultima IX: Ascension, however, diverges from this redemption: Blackthorn leaves the island before the Avatar arrives on the Serpent Isle and returns as a villain, this time as a servant of the Guardian, contradicting the restorative ending of Ultima VII Part Two: Serpent Isle. In the end, Blackthorn perishes at the hands of Lord British after an extended magical duel at the center of the Great Stygian Abyss. In Ultima Online, the timeline of which diverges from the main series after Ultima I, Blackthorn is the closest friend of Lord British, but at the same time also his fiercest enemy. He defended the people's individuality and freedom of belief by creating his own virtue, chaos; in this case, chaos does not represent the destructive force with which it is usually associated. He eventually forged an alliance with various dark magics and emerged as an evil force. After surviving for a few years, he was seemingly killed in an assault on the city of Yew. The evil form was later retconned into being a facsimile, and the original Lord Blackthorn became the king of Britannia. Lord Blackthorn was the virtual persona of Ultima Online project director Starr Long. The Guardian is an alien being of immense power from another dimension. A large red humanoid, he is described as a conqueror of worlds. He first appears in Ultima VII: The Black Gate, although for the majority of the game he is only a disembodied voice. Having conquered other worlds, he first attempts to conquer Britannia through his agent Batlin, the founder and leader of the Fellowship. The ultimate plan was to create a black moongate that would allow the Guardian to physically enter Britannia and conquer it. The Avatar discovers the Guardian's plan and destroys the black moongate as the Guardian is attempting to enter. The Ultima series of computer games employed several different artificial scripts. The people of Britannia, the fantasy world where the games are set, speak English, and most day-to-day writing uses the Latin alphabet. However, other scripts are also used by tradition. Britannian runes are the most commonly seen script, and in many of the games most signs are written in runic. The runes are based on Germanic runes, but are closer to the Dwarven runes in Tolkien's The Lord of the Rings, which creator Richard Garriott has stated he has read. They gained steadier use from Ultima V onward, the first game in the series to use a runic font for in-game signs; runes in earlier games were mostly found in hard-copy materials, such as maps and the decorative covers of booklets. Runes appear less often in Ultima VII and later games. Gargish is the language of the gargoyles of Britannia and the language used in spellcasting within the game. Unlike the runic script, which is usually used simply as a visual cipher for English, the Gargish script encodes a genuine constructed language, based on (but expanding greatly upon) the magical words of power that first saw use in Ultima V, as well as the mantras for each of the Shrines of Virtue, which had remained consistent since Ultima IV.
The lexicon mostly comprises deformed or truncated Latinate stems (flam "fire" ← Latin flamma; lap "stone" ← Latin lapis; leg "to read" ← Latin legō), but other origins are also apparent (uis "wisdom" ← English wise; kas "helmet" ← French casque). The grammar, however, is de novo and bears little resemblance to Latin, being largely analytic in structure instead. Gargish uses suffixes to denote grammatical tense and aspect, and also in some forms of derivation. The Gargish alphabet is featured in Ultima VI, though it is seen only in specific game contexts. Ultima VII and later games do not feature anything written in the alphabet, with the sole exception of some books found in the gargoyle colony in the underwater city of Ambrosia in Ultima IX. The Gargish language and alphabet were designed by Herman Miller. The Ophidian alphabet, featured in Ultima VII Part Two: Serpent Isle, was used by the Ophidian civilization that inhabited the Serpent Isle. It is based on various snake forms. Ophidian lettering was quite difficult to read, so the game included a Translation spell that made the letters look like Latin letters. Reception In the United States, the first five Ultima games had collectively sold more than 470,000 copies for home computers by 1990. In Japan, total sales of Pony Canyon's Japanese versions of the Ultima series had reached nearly 100,000 copies on home computers and over 300,000 units on the Famicom (Nintendo Entertainment System) by 1990. In 1996, Next Generation ranked the Ultima series collectively as the 55th top game of all time, commenting that, "While the graphics and playing style change with the technological leaps of the day, [it] has been the most consistent source of roleplaying excitement in history." In 1999, Next Generation listed the Ultima series as number 18 on their "Top 50 Games of All Time", commenting that, "Most PC RPGs are about hacking and slashing through anything that moves, usually while crawling through a dungeon. The Ultima series, however, has always been firmly grounded in a world where a character's virtues are as important as their armor class in determining success." In 2000, Britannia was included in GameSpot's list of the ten best game worlds, which called it "the oldest and one of the most historically rich gameworlds." Impact and legacy Many innovations of the early Ultimas – in particular Ultima III: Exodus (1983) – eventually became standard among later RPGs, such as the use of tiled graphics, party-based combat, a mix of fantasy and science-fiction elements, and the introduction of time travel as a plot device. In turn, some of these elements were inspired by Wizardry, specifically the party-based combat. Exodus was also revolutionary in its use of a written narrative to convey a larger story than the typically minimal plots that were common at the time. Most video games – including Garriott's own Ultima I and II and Akalabeth – tended to focus primarily on things like combat without venturing much further. In addition, Garriott would introduce in Ultima IV a theme that would persist throughout later Ultimas – a system of chivalry and code of conduct in which the player, or "Avatar", is tested periodically (in both obvious and unseen ways) and judged according to his or her actions. This system of morals and ethics was unique in that, in other video games, players could for the most part act as they wished without having to consider the consequences of their actions.
Ultima III would go on to be released for many other platforms and influenced the development of such RPGs as Excalibur, The Legend of Zelda and Dragon Quest; many consider the game to be the first modern computer role-playing game. On June 30, 2020, Garriott said he had been turned down by EA in his attempts to revive or remaster the Ultima series. Richard Garriott's new company Portalarium developed an RPG/MMORPG that Garriott has described as a clear spiritual successor to the Ultima series. On March 8, 2013, Portalarium launched a Kickstarter campaign for Shroud of the Avatar: Forsaken Virtues. Forsaken Virtues is the first of five planned full-length episodic installments in Shroud of the Avatar and was designed as a "Selective Multiplayer Game", allowing each player to determine a level of multiplayer involvement ranging from MMO to offline single-player. Despite original plans to launch in summer 2017, with Episodes 2 through 5 estimated for subsequent yearly releases, the first episode was ultimately released on March 27, 2018, to mixed reception. Further episodes have not yet been released. References Sources External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-NYT-20190601-103] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he holds Canadian citizenship because his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. In 2002, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership during the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package worth $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared his dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies", where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten so severely that he was hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books and has attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School.
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H-1B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Speaking to Rolling Stone, Musk denounced the notion that they had started the company with funds borrowed from Errol Musk, but in a tweet he acknowledged that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and the lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025).
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth-chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and Chief Engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, SpaceX was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, SpaceX successfully landed the first stage of a Falcon 9 on a land platform in 2015. Later landings were achieved on autonomous spaceport drone ships, ocean-based recovery platforms. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, the Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several strong-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over his tweet that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second-largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink.[needs update] In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021, and local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk then agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk made a $43 billion offer to buy Twitter, and by the end of April he had successfully concluded his bid for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and became CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk lessened content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk promised to step down as CEO after a Twitter poll; five months later, he did so, transitioning to executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence some of his critics, such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks, which hinders visibility and is considered a form of shadow banning, or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets and the consequent fossil fuel usage have received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content while framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, for example, the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign, and hosted DeSantis's campaign announcement in a Twitter Spaces event. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it had been a mistake for the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021, which featured executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the snub was a nod to the United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the non-invitation affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized Biden's White House as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen, and other capitalists actually flourished under Biden, but that the tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying: "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally, and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or a fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, which Musk accepted. In November and December 2024, Musk suggested that the organization could help cut the U.S. federal budget, consolidate federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was not clear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE, and a federal judge has since ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly in response to DOGE, has attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and has been accused of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when his 130-day term as a special government employee expired, with a White House official confirming that Musk's offboarding from the administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk's allegation on X (formerly Twitter) on June 5, 2025, that Trump had ties to sex offender Jeffrey Epstein. Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and he regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars, which he has repeatedly promoted so that humanity can become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was subsequently criticized for spreading misinformation and amplifying the far right. He has also voiced support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were fined $20 million each, and Musk was forced to step down as Tesla chairman for three years but was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Freedom of Information Act (FOIA)-released records showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting regarding "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded. McCormick called the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at the TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated he uses doctor-prescribed ketamine for occasional depression and that he doses "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" The New York Times' investigations revealed Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has stated have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings. He has justified the boosting by claiming that all top accounts do it, so he has to as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and specified the inclusion of the historical figure Yasuke in the Assassin's Creed game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After Nevada's death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin he had with Wilson came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations as it contained characters that are not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C., during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair with Nicole Shanahan, the wife of Google co-founder Sergey Brin, in 2021, leading to their divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk had bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, which media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognised as the child's father. On March 31, 2025, Musk wrote that, while he was unsure if he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments to St. Clair was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place, as Epstein later cancelled the plans.[k] On Christmas Day in 2012, Musk emailed Epstein asking "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded by stating "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 if he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiry about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; around 75% of his wealth was derived from Tesla stock in November 2020, although he describes himself as "cash poor". According to Forbes, he became the first person in the world to achieve a net worth of $300 billion in 2021; $400 billion in December 2024; $500 billion in October 2025; $600 billion in mid-December 2025; $700 billion later that month; and $800 billion in February 2026. In November 2025, a Tesla pay package worth potentially $1 trillion for Musk was approved, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries starting in the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, in contrast to other billionaires who prefer reclusiveness in order to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn denouncement for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president", or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021. Musk was selected as Time's "Person of the Year" for 2021. Time's then editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." Notes References Works cited Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Planetary_habitability] | [TOKENS: 10015] |
Contents Planetary habitability Planetary habitability is a measure used in astrobiology to characterize a planet's or a natural satellite's potential to develop and sustain an environment hospitable to life. The Planetary Habitability Laboratory maintains a catalog of potentially habitable exoplanets. Background Research suggests that life on a planetary body may develop through abiogenesis or be transferred from one body to another, through a hypothetical process known as panspermia. Environments do not need to contain life to be considered habitable, nor are accepted habitable zones (HZ) the only areas in which life might arise. As the existence of life beyond Earth is not known, planetary habitability is largely an extrapolation of conditions on Earth and the characteristics of the Sun and Solar System which appear favorable to life's flourishing. Of particular interest are those factors that have sustained complex, multicellular organisms on Earth and not just simpler, unicellular creatures. Research and theory in this regard are components of a number of natural sciences, such as astronomy, planetary science, and the emerging discipline of astrobiology. An absolute requirement for life is an energy source, and the notion of planetary habitability implies that many other geophysical, geochemical, and astrophysical criteria must be met before an astronomical body can support life. In its astrobiology roadmap, NASA has defined the principal habitability criteria as "extended regions of liquid water, conditions favorable for the assembly of complex organic molecules, and energy sources to sustain metabolism". In August 2018, researchers reported that water worlds could support life. Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. In determining the habitability potential of a body, studies focus on its bulk composition, orbital properties, atmosphere, and potential chemical interactions. Stellar characteristics of importance include mass and luminosity, stable variability, and high metallicity. Rocky, wet terrestrial-type planets and moons with the potential for Earth-like chemistry are a primary focus of astrobiological research. However, more speculative habitability theories occasionally examine alternative biochemistries and other types of astronomical bodies. The idea that planets beyond Earth might host life is an ancient one, though historically it was framed by philosophy as much as physical science.[a] The late 20th century saw two breakthroughs in the field. The observation and robotic spacecraft exploration of other planets and moons within the Solar System has provided critical information on defining habitability criteria and allowed for substantial geophysical comparisons between the Earth and other bodies. The discovery of exoplanets, beginning in the early 1990s and accelerating thereafter, has provided further information for the study of possible extraterrestrial life. These findings confirm that the Sun is not unique among stars in hosting planets and expand the habitability research horizon beyond the Solar System. While Earth is the only place in the Universe known to harbor life, estimates of habitable zones around other stars, along with the discovery of thousands of exoplanets and new insights into the extreme habitats on Earth where organisms known as extremophiles live, suggest that there may be many more habitable places in the Universe than considered possible until very recently. 
On 4 November 2013, astronomers reported, based on Kepler space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way. Some 11 billion of these estimated planets may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. As of June 2021, a total of 59 potentially habitable exoplanets have been found. Stellar characteristics An understanding of planetary habitability begins with the host star. The classical habitable zone (HZ) is defined for surface conditions only, but a metabolism that does not depend on stellar light can still exist outside the HZ, thriving in the interior of the planet where liquid water is available. Under the auspices of SETI's Project Phoenix, scientists Margaret Turnbull and Jill Tarter developed the "HabCat" (or Catalogue of Habitable Stellar Systems) in 2002. The catalog was formed by winnowing the nearly 120,000 stars of the larger Hipparcos Catalogue into a core group of 17,000 potentially habitable stars, and the selection criteria that were used provide a good starting point for understanding which astrophysical factors are necessary for habitable planets. According to research published in August 2015, very large galaxies may be more favorable to the formation and development of habitable planets than smaller galaxies, like the Milky Way galaxy. However, what makes a planet habitable is a much more complex question than having a planet located at the right distance from its host star so that water can be liquid on its surface: various geophysical and geodynamical aspects, the radiation environment, and the host star's plasma environment can influence the evolution of planets, and of life if it has originated. Liquid water is a necessary but not sufficient condition for life as we know it, as habitability is a function of a multitude of environmental parameters. The spectral class of a star indicates its photospheric temperature, which (for main-sequence stars) correlates to overall mass. The appropriate spectral range for habitable stars is considered to be "late F" or "G", to "mid-K". This corresponds to temperatures of a little more than 7,000 K down to a little less than 4,000 K (6,700 °C to 3,700 °C); the Sun, a G2 star at 5,777 K, is well within these bounds. This spectral range probably accounts for between 5% and 10% of stars in the local Milky Way galaxy. "Middle-class" stars of this sort have a number of characteristics considered important to planetary habitability; K-type stars, for example, may be able to support life far longer than the Sun. Whether fainter late-K and M-class red dwarf stars are also suitable hosts for habitable planets is perhaps the most important open question in the entire field of planetary habitability, given their prevalence (see the discussion of red dwarf systems below). Gliese 581 c, a "super-Earth", has been found orbiting in the "habitable zone" (HZ) of a red dwarf and may possess liquid water. However, it is also possible that a greenhouse effect may render it too hot to support life, while its neighbor, Gliese 581 d, may be a more likely candidate for habitability. In September 2010, the discovery was announced of another planet, Gliese 581 g, in an orbit between these two planets. However, reviews of the discovery have placed the existence of this planet in doubt, and it is listed as "unconfirmed". In September 2012, the discovery of two planets orbiting Gliese 163 was announced. 
One of the planets, Gliese 163 c, about 6.9 times the mass of Earth and somewhat hotter, was considered to be within the habitable zone. A recent study suggests that cooler stars that emit more of their light in the infrared and near-infrared may actually host warmer planets with less ice and a lower incidence of snowball states, because these wavelengths are absorbed by the planets' ice and greenhouse gases, keeping the planets warmer. A 2020 study found that about half of Sun-like stars could host rocky, potentially habitable planets. Specifically, they estimated that, on average, the nearest habitable zone planet around G and K-type stars is about 6 parsecs away, and there are about 4 rocky planets around G and K-type stars within 10 parsecs (32.6 light years) of the Sun. The habitable zone (HZ) is a shell-shaped region of space surrounding a star in which a planet could maintain liquid water on its surface. The concept was first proposed by astrophysicist Su-Shu Huang in 1959, based on climatic constraints imposed by the host star. After an energy source, liquid water is widely considered the most important ingredient for life, given how integral it is to all life systems on Earth. However, if life is discovered in the absence of water, the definition of an HZ may have to be greatly expanded. The inner edge of the HZ is the distance where a runaway greenhouse effect vaporizes the whole water reservoir and, as a second effect, induces the photodissociation of water vapor and the loss of hydrogen to space. The outer edge of the HZ is the distance from the star where even a maximum greenhouse effect fails to keep the surface of the planet above the freezing point, and where CO2 (carbon dioxide) condenses out of the atmosphere. A "stable" HZ implies two factors. First, the range of an HZ should not vary greatly over time. All stars increase in luminosity as they age, and a given HZ thus migrates outwards, but if this happens too quickly (for example, with a super-massive star) planets may only have a brief window inside the HZ and a correspondingly smaller chance of developing life. Calculating an HZ range and its long-term movement is never straightforward, as negative feedback loops such as the CNO cycle will tend to offset the increases in luminosity. Assumptions made about atmospheric conditions and geology thus have as great an impact on a putative HZ range as does stellar evolution: the proposed parameters of the Sun's HZ, for example, have fluctuated greatly. Second, no large-mass body such as a gas giant should be present in or relatively close to the HZ, where it could disrupt the formation of Earth-size bodies. The matter in the asteroid belt, for example, appears to have been unable to accrete into a planet due to orbital resonances with Jupiter; if the giant had appeared in the region that is now between the orbits of Venus and Mars, Earth would almost certainly not have developed in its present form. However, a gas giant inside the HZ might have habitable moons under the right conditions. Changes in luminosity are common to all stars, but the severity of such fluctuations covers a broad range. Most stars are relatively stable, but a significant minority of variable stars often undergo sudden and intense increases in luminosity and consequently in the amount of energy radiated toward bodies in orbit. 
These stars are considered poor candidates for hosting life-bearing planets, as their unpredictability and energy output changes would negatively impact organisms: living things adapted to a specific temperature range could not survive too great a temperature variation. Further, upswings in luminosity are generally accompanied by massive doses of gamma ray and X-ray radiation which might prove lethal. Atmospheres do mitigate such effects, but planets orbiting variable stars might not retain their atmospheres, because the high-frequency energy buffeting these planets would continually strip them of their protective covering. The Sun, in this respect as in many others, is relatively benign: the variation between its maximum and minimum energy output is roughly 0.1% over its 11-year solar cycle. There is strong (though not undisputed) evidence that even minor changes in the Sun's luminosity have had significant effects on the Earth's climate well within the historical era: the Little Ice Age of the mid-second millennium, for instance, may have been caused by a relatively long-term decline in the Sun's luminosity. Thus, a star does not have to be a true variable for differences in luminosity to affect habitability. Of known solar analogs, one that closely resembles the Sun is considered to be 18 Scorpii; unfortunately for the prospects of life existing in its proximity, the only significant difference between the two bodies is the amplitude of the solar cycle, which appears to be much greater for 18 Scorpii. While the bulk of the material in any star is hydrogen and helium, there is a significant variation in the amount of heavier elements (metals). A high proportion of metals in a star correlates with the amount of heavy material initially available in the protoplanetary disk. A smaller amount of metal makes the formation of planets much less likely, under the solar nebula theory of planetary system formation. Any planets that did form around a metal-poor star would probably be low in mass, and thus unfavorable for life. Spectroscopic studies of systems where exoplanets have been found to date confirm the relationship between high metal content and planet formation: "Stars with planets, or at least with planets similar to the ones we are finding today, are more metal-rich than stars without planetary companions." This relationship between high metallicity and planet formation also means that habitable systems are more likely to be found around stars of younger generations, since stars that formed early in the universe's history have low metal content. Planetary characteristics Habitability indicators and biosignatures must be interpreted within a planetary and environmental context. Whether a planet will emerge as habitable depends on the sequence of events that led to its formation, which could include the production of organic molecules in molecular clouds and protoplanetary disks, delivery of materials during and after planetary accretion, and the orbital location in the planetary system. The chief assumption about habitable planets is that they are terrestrial. Such planets, roughly within one order of magnitude of Earth mass, are primarily composed of silicate rocks, and have not accreted the gaseous outer layers of hydrogen and helium found on gas giants. The possibility that life could evolve in the cloud tops of giant planets has not been decisively ruled out,[c] though it is considered unlikely, as they have no surface and their gravity is enormous. 
The natural satellites of giant planets, meanwhile, remain valid candidates for hosting life. In February 2011 the Kepler Space Observatory Mission team released a list of 1235 extrasolar planet candidates, including 54 that may be in the habitable zone. Six of the candidates in this zone are smaller than twice the size of Earth. A more recent study found that one of these candidates (KOI 326.01) is much larger and hotter than first reported. Based on the findings, the Kepler team estimated there to be "at least 50 billion planets in the Milky Way" of which "at least 500 million" are in the habitable zone. In analyzing which environments are likely to support life, a distinction is usually made between simple, unicellular organisms such as bacteria and archaea and complex metazoans (animals). Unicellularity necessarily precedes multicellularity in any hypothetical tree of life, and where single-celled organisms do emerge there is no assurance that greater complexity will then develop.[d] The planetary characteristics listed below are considered crucial for life generally, but in every case, multicellular organisms are more picky than unicellular life. In August 2021, a new class of potentially habitable planets, named Hycean planets ("hot, ocean-covered planets with hydrogen-rich atmospheres"), was reported. Hycean planets may soon be studied for biosignatures by terrestrial telescopes as well as space telescopes, such as the James Webb Space Telescope (JWST), which was launched on 25 December 2021. Low-mass planets are poor candidates for life for two reasons. First, their lesser gravity makes atmosphere retention difficult. Constituent molecules are more likely to reach escape velocity and be lost to space when buffeted by solar wind or stirred by collision. Planets without a thick atmosphere lack the matter necessary for primal biochemistry, have little insulation and poor heat transfer across their surfaces (for example, Mars, with its thin atmosphere, is colder than the Earth would be if it were at a similar distance from the Sun), and provide less protection against meteoroids and high-frequency radiation. Further, where an atmosphere is less dense than 0.006 Earth atmospheres, water cannot exist in liquid form, as the required atmospheric pressure, 4.56 mm Hg (608 Pa; 0.18 inch Hg), does not occur. In addition, a lessened pressure reduces the range of temperatures at which water is liquid. Secondly, smaller planets have smaller diameters and thus higher surface-to-volume ratios than their larger cousins. Such bodies tend to lose the energy left over from their formation quickly and end up geologically dead, lacking the volcanoes, earthquakes and tectonic activity which supply the surface with life-sustaining material and the atmosphere with temperature moderators like carbon dioxide. Plate tectonics appear particularly crucial, at least on Earth: not only does the process recycle important chemicals and minerals, but it also fosters bio-diversity through continent creation and increased environmental complexity and helps create the convective cells necessary to generate Earth's magnetic field. However, geologically active planets with volcanism but no plate tectonics, called Ignan Earths, could also be habitable. 
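As a quick check of the pressure figures above: 0.006 of an Earth atmosphere is 0.006 × 101,325 ≈ 608 Pa, essentially the triple-point pressure of water (about 611.7 Pa by standard references), below which water has no liquid phase at any temperature. A minimal sketch of that arithmetic (function and constant names are illustrative, not from the article):

```python
# Sketch: does surface pressure alone permit liquid water?
# Below the triple-point pressure of water, only ice and vapor can exist.

ATM_PA = 101_325.0        # one standard atmosphere in pascals
TRIPLE_POINT_PA = 611.7   # triple-point pressure of water (standard value;
                          # the article quotes ~608 Pa, i.e. 4.56 mmHg)

def liquid_water_possible(surface_pressure_atm: float) -> bool:
    """True if the pressure is high enough for a liquid water phase."""
    return surface_pressure_atm * ATM_PA >= TRIPLE_POINT_PA

for p_atm in (1.0, 0.01, 0.005):   # Earth, a thin atmosphere, below threshold
    pa = p_atm * ATM_PA
    print(f"{p_atm:>6} atm = {pa:8.1f} Pa -> liquid water possible: "
          f"{liquid_water_possible(p_atm)}")
```

Note that this is a necessary condition only: passing the pressure check says nothing about temperature, which must also sit in the liquid range quoted in the text.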
"Low mass" is partly a relative label: the Earth is low mass when compared to the Solar System's gas giants, but it is the largest, by diameter and mass, and the densest of all terrestrial bodies.[e] It is large enough to retain an atmosphere through gravity alone and large enough that its molten core remains a heat engine, driving the diverse geology of the surface (the decay of radioactive elements within a planet's core is the other significant component of planetary heating). Mars, by contrast, is nearly (or perhaps totally) geologically dead and has lost much of its atmosphere. Thus it would be fair to infer that the lower mass limit for habitability lies somewhere between that of Mars and that of Earth or Venus: 0.3 Earth masses has been offered as a rough dividing line for habitable planets. However, a 2008 study by the Harvard-Smithsonian Center for Astrophysics suggests that the dividing line may be higher. Earth may in fact lie on the lower boundary of habitability: if it were any smaller, plate tectonics would be impossible. Venus, which has 85% of Earth's mass, shows no signs of tectonic activity. Conversely, "super-Earths", terrestrial planets with higher masses than Earth, would have higher levels of plate tectonics and thus be firmly placed in the habitable range. Exceptional circumstances do offer exceptional cases: Jupiter's moon Io (which is smaller than any of the terrestrial planets) is volcanically dynamic because of the gravitational stresses induced by its orbit, and its neighbor Europa may have a liquid ocean or icy slush underneath a frozen shell also due to power generated from orbiting a gas giant. Saturn's Titan, meanwhile, has an outside chance of harboring life, as it has retained a thick atmosphere and has liquid methane seas on its surface. Organic-chemical reactions that only require minimum energy are possible in these seas, but whether any living system can be based on such minimal reactions is unclear, and would seem unlikely. These satellites are exceptions, but they prove that mass, as a criterion for habitability, cannot necessarily be considered definitive at this stage of our understanding. A larger planet is likely to have a more massive atmosphere. A combination of higher escape velocity to retain lighter atoms, and extensive outgassing from enhanced plate tectonics may greatly increase the atmospheric pressure and temperature at the surface compared to Earth. The enhanced greenhouse effect of such a heavy atmosphere would tend to suggest that the habitable zone should be further out from the central star for such massive planets. Finally, a larger planet is likely to have a large iron core. This allows for a magnetic field to protect the planet from stellar wind and cosmic radiation, which otherwise would tend to strip away planetary atmosphere and bombard living things with ionized particles. Mass is not the only criterion for producing a magnetic field—as the planet must also rotate fast enough to produce a dynamo effect within its core—but it is a significant component of the process. The mass of a potentially habitable exoplanet is between 0.1 and 5.0 Earth masses. However, it is possible for a habitable world to have a mass as low as 0.0268 Earth Masses. The radius of a potentially habitable exoplanet would range between 0.5 and 1.5 Earth radii. As with other criteria, stability is the critical consideration in evaluating the effect of orbital and rotational characteristics on planetary habitability. 
Orbital eccentricity is the difference between a planet's farthest and closest approach to its parent star divided by the sum of said distances; that is, e = (rmax − rmin) / (rmax + rmin), where rmax and rmin are the apoapsis and periapsis distances. It is a ratio describing the shape of the elliptical orbit. The greater the eccentricity, the greater the temperature fluctuation on a planet's surface. Although they are adaptive, living organisms can stand only so much variation, particularly if the fluctuations overlap both the freezing point and boiling point of the planet's main biotic solvent (e.g., water on Earth). If, for example, Earth's oceans were alternately boiling and freezing solid, it is difficult to imagine life as we know it having evolved. The more complex the organism, the greater the temperature sensitivity. The Earth's orbit is almost perfectly circular, with an eccentricity of less than 0.02; other planets in the Solar System (with the exception of Mercury and Mars) have eccentricities that are similarly benign. Habitability is also influenced by the architecture of the planetary system around a star. The evolution and stability of these systems are determined by gravitational dynamics, which drive the orbital evolution of terrestrial planets. Data collected on the orbital eccentricities of extrasolar planets have surprised most researchers: 90% have an orbital eccentricity greater than that found within the Solar System, and the average is fully 0.25. This means that the vast majority of planets have highly eccentric orbits, and of these, even if their average distance from their star is deemed to be within the HZ, they nonetheless would be spending only a small portion of their time within the zone. A planet's movement around its rotational axis must also meet certain criteria if life is to have the opportunity to evolve. A first assumption is that the planet should have moderate seasons. If there is little or no axial tilt (or obliquity) relative to the perpendicular of the ecliptic, seasons will not occur and a main stimulant to biospheric dynamism will disappear. The planet would also be colder than it would be with a significant tilt: when the greatest intensity of radiation is always within a few degrees of the equator, warm weather cannot move poleward and a planet's climate becomes dominated by colder polar weather systems. If a planet is radically tilted, seasons will be extreme and make it more difficult for a biosphere to achieve homeostasis. The axial tilt of the Earth is higher now (in the Quaternary) than it has been in the past, coinciding with reduced polar ice, warmer temperatures, and less seasonal variation. Scientists do not know whether this trend will continue indefinitely with further increases in axial tilt (see Snowball Earth). The exact effects of these changes can only be computer modelled at present, and studies have shown that even extreme tilts of up to 85 degrees do not absolutely preclude life "provided it does not occupy continental surfaces plagued seasonally by the highest temperature." Not only the mean axial tilt, but also its variation over time must be considered. The Earth's tilt varies between 21.5 and 24.5 degrees over 41,000 years. A more drastic variation, or a much shorter periodicity, would induce climatic effects such as variations in seasonal severity. Other orbital considerations include the following: The Earth's Moon appears to play a crucial role in moderating the Earth's climate by stabilising the axial tilt. It has been suggested that a chaotic tilt may be a "deal-breaker" in terms of habitability—i.e. 
a satellite the size of the Moon is not only helpful but required to produce stability. This position remains controversial.[f] In the case of the Earth, the sole Moon is sufficiently massive and orbits so as to significantly contribute to ocean tides, which in turn aids the dynamic churning of Earth's large liquid water oceans. These lunar forces not only help ensure that the oceans do not stagnate, but also play a critical role in Earth's dynamic climate. Concentrations of radionuclides in rocky planet mantles may be critical for the habitability of Earth-like planets. Such planets with higher abundances likely lack a persistent dynamo for a significant fraction of their lifetimes, and those with lower concentrations may often be geologically inert. Planetary dynamos create strong magnetic fields which may often be necessary for life to develop or persist, as they shield planets from solar winds and cosmic radiation. The electromagnetic emission spectra of stars could be used to identify those which are more likely to host habitable Earth-like planets. As of 2020, radionuclides are thought to be produced by rare stellar processes such as neutron star mergers. Additional geological characteristics may be essential or major factors in the habitability of natural celestial bodies – including some that may shape the body's heat and magnetic field. Some of these are unknown or not well understood and are being investigated by planetary scientists, geochemists and others. It is generally assumed that any extraterrestrial life that might exist will be based on the same fundamental biochemistry as found on Earth, as the four elements most vital for life, carbon, hydrogen, oxygen, and nitrogen, are also the most common chemically reactive elements in the universe. Indeed, simple biogenic compounds, such as the very simple amino acid glycine, have been found in meteorites and in the interstellar medium. These four elements together comprise over 96% of Earth's collective biomass. Carbon has an unparalleled ability to bond with itself and to form a massive array of intricate and varied structures, making it an ideal material for the complex mechanisms that form living cells. Hydrogen and oxygen, in the form of water, compose the solvent in which biological processes take place and in which the first reactions occurred that led to life's emergence. The energy released in the formation of powerful covalent bonds between carbon and oxygen, available by oxidizing organic compounds, is the fuel of all complex life-forms. These four elements together make up amino acids, which in turn are the building blocks of proteins, the substance of living tissue. In addition, neither sulfur (required for the building of proteins) nor phosphorus (needed for the formation of DNA, RNA, and the adenosine phosphates essential to metabolism) is rare. Relative abundance in space does not always mirror differentiated abundance within planets; of the four life elements, for instance, only oxygen is present in any abundance in the Earth's crust. This can be partly explained by the fact that many of these elements, such as hydrogen and nitrogen, along with their simplest and most common compounds, such as carbon dioxide, carbon monoxide, methane, ammonia, and water, are gaseous at warm temperatures. In the hot region close to the Sun, these volatile compounds could not have played a significant role in the planets' geological formation. 
Instead, they were trapped as gases underneath the newly formed crusts, which were largely made of rocky, involatile compounds such as silica (a compound of silicon and oxygen, accounting for oxygen's relative abundance). Outgassing of volatile compounds through the first volcanoes would have contributed to the formation of the planets' atmospheres. The Miller–Urey experiment showed that, with the application of energy, simple inorganic compounds exposed to a primordial atmosphere can react to synthesize amino acids. Even so, volcanic outgassing could not have accounted for the amount of water in Earth's oceans. The vast majority of the water—and arguably carbon—necessary for life must have come from the outer Solar System, away from the Sun's heat, where it could remain solid. Comets impacting the Earth in the Solar System's early years would have deposited vast amounts of water, along with the other volatile compounds life requires, onto the early Earth, providing a kick-start to the origin of life. Thus, while there is reason to suspect that the four "life elements" ought to be readily available elsewhere, a habitable system probably also requires a supply of long-term orbiting bodies to seed inner planets. Without comets there is a possibility that life as we know it would not exist on Earth. One important qualification to habitability criteria is that only a tiny portion of a planet is required to support life, a so-called Goldilocks Edge or Great Prebiotic Spot. Astrobiologists often concern themselves with "micro-environments", noting that "we lack a fundamental understanding of how evolutionary forces, such as mutation, selection, and genetic drift, operate in micro-organisms that act on and respond to changing micro-environments." Extremophiles are Earth organisms that live in niche environments under severe conditions generally considered inimical to life. Usually (although not always) unicellular, extremophiles include acutely alkaliphilic and acidophilic organisms and others that can survive water temperatures above 100 °C in hydrothermal vents. The discovery of life in extreme conditions has complicated definitions of habitability, but also generated much excitement amongst researchers in greatly broadening the known range of conditions under which life can persist. For example, a planet that might otherwise be unable to support an atmosphere given the solar conditions in its vicinity, might be able to do so within a deep shadowed rift or volcanic cave. Similarly, craterous terrain might offer a refuge for primitive life. The Lawn Hill crater has been studied as an astrobiological analog, with researchers suggesting rapid sediment infill created a protected microenvironment for microbial organisms; similar conditions may have occurred over the geological history of Mars. Earth environments that cannot support life are still instructive to astrobiologists in defining the limits of what organisms can endure. The heart of the Atacama Desert, generally considered the driest place on Earth, appears unable to support life, and it has been the subject of study by NASA and ESA for that reason: it provides a Mars analog and the moisture gradients along its edges are ideal for studying the boundary between sterility and habitability. The Atacama was the subject of a 2003 study that partly replicated experiments from the Viking landings on Mars in the 1970s; no DNA could be recovered from two soil samples, and incubation experiments were also negative for biosignatures. 
The two current ecological approaches for predicting potential habitability use 19 or 20 environmental factors, with emphasis on water availability, temperature, presence of nutrients, an energy source, and protection from solar ultraviolet and galactic cosmic radiation. The Habitable Exoplanets Catalog uses an estimated surface temperature range to classify exoplanets: mesoplanets would be ideal for complex life, whereas hypopsychroplanets and hyperthermoplanets might only support extremophilic life. The HEC uses the following terms to classify exoplanets in terms of mass, from least to greatest: asteroidan, mercurian, subterran, terran, superterran, neptunian, and jovian. Alternative star systems In determining the feasibility of extraterrestrial life, astronomers had long focused their attention on stars like the Sun. However, since planetary systems that resemble the Solar System are proving to be rare, they have begun to explore the possibility that life might form in systems very unlike the Sun's. It is believed that F, G, K and M-type stars could host habitable exoplanets. About half of the stars similar in temperature to the Sun could have a rocky planet able to support liquid water on its surface, according to research using data from NASA's Kepler Space Telescope. Typical estimates suggest that 50% or more of all stellar systems are binary systems. This may be partly sample bias, as massive and bright stars tend to be in binaries and these are most easily observed and catalogued; a more precise analysis has suggested that the more common fainter stars are usually singular, and that up to two thirds of all stellar systems are therefore solitary. The separation between stars in a binary may range from less than one astronomical unit (AU, the average Earth–Sun distance) to several hundred. In the latter instances, the gravitational effects on a planet orbiting an otherwise suitable star will be negligible, and habitability potential will not be disrupted unless the orbit is highly eccentric. However, where the separation is significantly less, a stable orbit may be impossible. If a planet's distance to its primary exceeds about one fifth of the closest approach of the other star, orbital stability is not guaranteed. Whether planets might form in binaries at all had long been unclear, given that gravitational forces might interfere with planet formation. Theoretical work by Alan Boss at the Carnegie Institution has shown that gas giants can form around stars in binary systems much as they do around solitary stars. One study of Alpha Centauri, the nearest star system to the Sun, suggested that binaries need not be discounted in the search for habitable planets. Centauri A and B have an 11 AU distance at closest approach (23 AU mean), and both should have stable habitable zones. A study of long-term orbital stability for simulated planets within the system shows that planets within approximately three AU of either star may remain rather stable (i.e., the semi-major axis deviates by less than 5% over 32,000 binary periods). The continuous habitable zone (CHZ for 4.5 billion years) for Centauri A is conservatively estimated at 1.2 to 1.3 AU and for Centauri B at 0.73 to 0.74 AU—well within the stable region in both cases. M-type stars are also considered possible hosts of habitable exoplanets, even those with flares such as Proxima b. 
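Two of the rules of thumb in this paragraph can be checked numerically: the one-fifth-of-closest-approach stability criterion, and, as an assumption of this sketch rather than a statement from the article, the common approximation that habitable-zone distance scales with the square root of stellar luminosity. The luminosities used below for Centauri A and B (roughly 1.5 and 0.5 times solar) are approximate literature values, not figures from the text:

```python
import math

# Sketch: two back-of-the-envelope checks for the Alpha Centauri binary.
# Rule 1 (from the text): a planet's orbit is not guaranteed stable if it
# lies beyond ~1/5 of the companion star's closest approach.
# Rule 2 (an assumption of this sketch): the habitable-zone distance scales
# as sqrt(L / L_sun), since stellar flux falls off with distance squared.

CLOSEST_APPROACH_AU = 11.0   # Centauri A-B separation at closest approach

def stability_limit_au(closest_approach_au: float) -> float:
    return closest_approach_au / 5.0

def hz_distance_au(luminosity_lsun: float) -> float:
    return math.sqrt(luminosity_lsun)   # 1 AU for a star of 1 L_sun

# Approximate luminosities (literature values, not from the article):
for star, lum in (("Centauri A", 1.5), ("Centauri B", 0.5)):
    hz = hz_distance_au(lum)
    limit = stability_limit_au(CLOSEST_APPROACH_AU)
    print(f"{star}: HZ ~ {hz:.2f} AU, stability limit ~ {limit:.1f} AU, "
          f"HZ inside stable region: {hz < limit}")
```

The results, roughly 1.2 AU for Centauri A and 0.7 AU for Centauri B, agree well with the conservatively estimated CHZ ranges quoted above, and both fall comfortably inside the ~2.2 AU stability limit.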
Determining the habitability of red dwarf stars could help determine how common life in the universe might be, as red dwarfs make up between 70 and 90% of all the stars in the galaxy. However, it is important to bear in mind that flare stars could greatly reduce the habitability of exoplanets by eroding their atmospheres. Astronomers for many years ruled out red dwarfs as potential abodes for life. Their small size (from 0.08 to 0.45 solar masses) means that their nuclear reactions proceed exceptionally slowly, and they emit very little light (from 3% of that produced by the Sun to as little as 0.01%). Any planet in orbit around a red dwarf would have to huddle very close to its parent star to attain Earth-like surface temperatures; from 0.3 AU (just inside the orbit of Mercury) for a star like Lacaille 8760, to as little as 0.032 AU for a star like Proxima Centauri. Such a world would have a year lasting just 6.3 days, as follows from Kepler's third law, by which the orbital period in years is approximately √(a³/M) for a semi-major axis a in AU and a stellar mass M in solar masses. At those distances, the star's gravity would cause tidal locking. One side of the planet would eternally face the star, while the other would always face away from it. The only ways in which potential life could avoid either an inferno or a deep freeze would be if the planet had an atmosphere thick enough to transfer the star's heat from the day side to the night side, or if there were a gas giant in the habitable zone, with a habitable moon, which would be locked to the planet instead of the star, allowing a more even distribution of radiation over the moon. It was long assumed that such a thick atmosphere would prevent sunlight from reaching the surface in the first place, preventing photosynthesis. This pessimism has been tempered by research. Studies by Robert Haberle and Manoj Joshi of NASA's Ames Research Center in California have shown that a planet's atmosphere (assuming it included greenhouse gases CO2 and H2O) need only be 100 millibars (0.10 atm) for the star's heat to be effectively carried to the night side. This is well within the levels required for photosynthesis, though water would still remain frozen on the dark side in some of their models. Martin Heath of Greenwich Community College has shown that seawater, too, could be effectively circulated without freezing solid if the ocean basins were deep enough to allow free flow beneath the night side's ice cap. Further research—including a consideration of the amount of photosynthetically active radiation—suggested that tidally locked planets in red dwarf systems might at least be habitable for higher plants. Size is not the only factor in making red dwarfs potentially unsuitable for life, however. On a planet orbiting a red dwarf, photosynthesis on the night side would be impossible, since that side would never see the sun. On the day side, because the sun does not rise or set, areas in the shadows of mountains would remain so forever. Photosynthesis as we understand it would be complicated by the fact that a red dwarf produces most of its radiation in the infrared, and on the Earth the process depends on visible light. There are potential positives to this scenario. Numerous terrestrial ecosystems rely on chemosynthesis rather than photosynthesis, for instance, which would be possible in a red dwarf system. A static primary-star position removes the need for plants to steer leaves toward the sun, deal with changing shade/sun patterns, or change from photosynthesis to stored energy at night. 
Because of the lack of a day-night cycle, including the weak light of morning and evening, far more energy would be available at a given radiation level. Red dwarfs are far more variable and violent than their more stable, larger cousins. Often they are covered in starspots that can dim their emitted light by up to 40% for months at a time, while at other times they emit gigantic flares that can double their brightness in a matter of minutes. Such variation would be very damaging for life, as it would not only destroy any complex organic molecules that could possibly form biological precursors, but also blow off sizeable portions of the planet's atmosphere. For a planet around a red dwarf star to support life, it would require a rapidly rotating magnetic field to protect it from the flares. A tidally locked planet rotates only very slowly, and so cannot produce a geodynamo at its core. The violent flaring period of a red dwarf's life cycle is estimated to last only roughly the first 1.2 billion years of its existence. If a planet forms far away from a red dwarf so as to avoid tidal locking, and then migrates into the star's habitable zone after this turbulent initial period, it is possible that life may have a chance to develop. However, observations of the 7-to-12-billion-year-old Barnard's Star show that even old red dwarfs can have significant flare activity. Barnard's Star was long assumed to have little activity, but in 1998 astronomers observed an intense stellar flare, showing that it is a flare star. Red dwarfs have one advantage over other stars as abodes for life: far greater longevity. It took 4.5 billion years before humanity appeared on Earth, and life as we know it will see suitable conditions for 1 to 2.3 billion years more. Red dwarfs, by contrast, could live for trillions of years because their nuclear reactions are far slower than those of larger stars, meaning that life would have longer to evolve and survive. While the likelihood of finding a planet in the habitable zone around any specific red dwarf is slight, the total amount of habitable zone around all red dwarfs combined is equal to the total amount around Sun-like stars given their ubiquity. Furthermore, this total amount of habitable zone will last longer, because red dwarf stars live for hundreds of billions of years or even longer on the main sequence. However, combined with the above disadvantages, it is more likely that red dwarf stars would remain habitable longer to microbes, while the shorter-lived yellow dwarf stars, like the Sun, would remain habitable longer to animals. Some A- and B-type stars that have protoplanetary disks (especially Herbig Ae/Be stars) may form habitable planets. Such a star's lifespan is short, but its habitable zone is wide, so its planets may become habitable for a brief period (on the order of thousands to a million years). For a list of A- and B-type stars with protoplanetary disks, see A-type main-sequence star and B-type main-sequence star. Recent research suggests that very large stars, greater than ~100 solar masses, could have planetary systems consisting of hundreds of Mercury-sized planets within the habitable zone. Such systems could also contain brown dwarfs and low-mass stars (~0.1–0.3 solar masses). However, the very short lifespans of stars of more than a few solar masses would scarcely allow time for a planet to cool, let alone the time needed for a stable biosphere to develop. Massive stars are thus eliminated as possible abodes for life. 
However, a massive-star system could be a progenitor of life in another way – the supernova explosion of the massive star in the central part of the system. This supernova will disperse throughout its vicinity the heavier elements created during the phase when the massive star moved off the main sequence, and the systems of the potential low-mass stars (which are still on the main sequence) within the former massive-star system may be enriched by the relatively large supply of heavy elements from the nearby supernova explosion. However, this states nothing about what types of planets would form as a result of the supernova material, or what their habitability potential would be. Four classes of habitable planets based on water In a review of the factors which are important for the evolution of habitable Earth-sized planets, Lammer et al. proposed a classification of four water-dependent habitat types: Class I habitats are planetary bodies on which stellar and geophysical conditions allow liquid water to be available at the surface, along with sunlight, so that complex multicellular organisms may originate. Class II habitats include bodies which initially enjoy Earth-like conditions, but do not keep their ability to sustain liquid water on their surface due to stellar or geophysical conditions. Mars, and possibly Venus, are examples of this class, where complex life forms may not develop. Class III habitats are planetary bodies where liquid water oceans exist below the surface, where they can interact directly with a silicate-rich core. Class IV habitats have liquid water layers between two ice layers, or liquids above ice. The galactic neighborhood Along with the characteristics of planets and their star systems, the wider galactic environment may also impact habitability. Scientists have considered the possibility that particular areas of galaxies (galactic habitable zones) are better suited to life than others; the Solar System, in the Orion Arm, on the Milky Way galaxy's edge, is considered to be in a life-favorable spot: relative isolation is ultimately what a life-bearing system needs. If the Sun were crowded amongst other systems, the chance of being fatally close to dangerous radiation sources would increase significantly. Further, close neighbors might disrupt the stability of various orbiting bodies such as Oort cloud and Kuiper belt objects, which can bring catastrophe if knocked into the inner Solar System. While stellar crowding proves disadvantageous to habitability, so too does extreme isolation. A star as metal-rich as the Sun would probably not have formed in the very outermost regions of the Milky Way, given a decline in the relative abundance of metals and a general lack of star formation. Thus, a "suburban" location, such as the Solar System enjoys, is preferable to a Galaxy's center or farthest reaches. Other considerations While most investigations of extraterrestrial life start with the assumption that advanced life-forms must have similar requirements for life as on Earth, the hypothesis of other types of biochemistry suggests the possibility of lifeforms evolving around a different metabolic mechanism. In Evolving the Alien, biologist Jack Cohen and mathematician Ian Stewart argue that astrobiology based on the Rare Earth hypothesis is restrictive and unimaginative. They suggest that Earth-like planets may be very rare, but non-carbon-based complex life could possibly emerge in other environments. 
The most frequently mentioned alternative to carbon is silicon-based life, while ammonia and hydrocarbons are sometimes suggested as alternative solvents to water. The astrobiologist Dirk Schulze-Makuch and other scientists have proposed a Planet Habitability Index whose criteria include "potential for holding a liquid solvent" that is not necessarily restricted to water. More speculative ideas have focused on bodies altogether different from Earth-like planets. Astronomer Frank Drake, a well-known proponent of the search for extraterrestrial life, imagined life on a neutron star: submicroscopic "nuclear molecules" combining to form creatures with a life cycle millions of times quicker than Earth life. Called "imaginative and tongue-in-cheek", the idea gave rise to science fiction depictions. Carl Sagan, another optimist with regard to extraterrestrial life, considered, in a 1976 paper, the possibility of organisms that are always airborne within the high atmosphere of Jupiter. Cohen and Stewart also envisioned life in both a solar environment and in the atmosphere of a gas giant. "Good Jupiters" are gas giants, like the Solar System's Jupiter, that orbit their stars in circular orbits far enough away from the habitable zone not to disturb it, but close enough to "protect" terrestrial planets in closer orbit in two critical ways. First, they help to stabilize the orbits, and thereby the climates, of the inner planets. Second, they keep the inner stellar system relatively free of comets and asteroids that could cause devastating impacts. Jupiter orbits the Sun at about five times the distance between the Earth and the Sun. This is the rough distance at which we should expect to find good Jupiters elsewhere. Jupiter's "caretaker" role was dramatically illustrated in 1994 when Comet Shoemaker–Levy 9 impacted the giant. However, the evidence is not quite so clear. Research has shown that Jupiter's role in determining the rate at which objects hit Earth is significantly more complicated than once thought. The role of Jupiter in the early history of the Solar System is somewhat better established, and the source of significantly less debate. Early in the Solar System's history, Jupiter is accepted as having played an important role in the hydration of our planet: it increased the eccentricity of asteroid belt orbits and enabled many to cross Earth's orbit and supply the planet with important volatiles such as water and carbon dioxide. Before Earth reached half its present mass, icy bodies from the Jupiter–Saturn region and small bodies from the primordial asteroid belt supplied water to the Earth due to the gravitational scattering of Jupiter and, to a lesser extent, Saturn. Thus, while the gas giants are now helpful protectors, they were once suppliers of critical habitability material. In contrast, Jupiter-sized bodies that orbit too close to the habitable zone but not in it (as in 47 Ursae Majoris), or have a highly elliptical orbit that crosses the habitable zone (like 16 Cygni B) make it very difficult for an independent Earth-like planet to exist in the system. See the discussion of a stable habitable zone above. However, during the process of migrating into a habitable zone, a Jupiter-size planet may capture a terrestrial planet as a moon. Even if such a planet is initially loosely bound and following a strongly inclined orbit, gravitational interactions with the star can stabilize the new moon into a close, circular orbit that is coplanar with the planet's orbit around the star. 
A supplement to the factors that support life's emergence is the notion that life itself, once formed, becomes a habitability factor in its own right. An important Earth example was the production of molecular oxygen gas (O2) by ancient cyanobacteria, and eventually photosynthesizing plants, leading to a radical change in the composition of Earth's atmosphere. This environmental change is called the Great Oxidation Event. This oxygen proved fundamental to the respiration of later animal species. The Gaia hypothesis, a scientific model of the geo-biosphere pioneered by James Lovelock in 1975, argues that life as a whole fosters and maintains suitable conditions for itself by helping to create a planetary environment suitable for its continuity. Similarly, David Grinspoon has suggested a "living worlds hypothesis" in which our understanding of what constitutes habitability cannot be separated from life already extant on a planet. Planets that are geologically and meteorologically alive are much more likely to be biologically alive as well, and "a planet and its life will co-evolve." This is the basis of Earth system science. In 2020, a computer simulation of the evolution of planetary climates over 3 billion years suggested that feedback is a necessary but insufficient condition for preventing planets from ever becoming too hot or cold for life, and that chance also plays a crucial role. Related considerations include as-yet-unknown factors influencing the thermal habitability of planets, such as the "feedback mechanism (or mechanisms) that prevents the climate ever wandering to fatal temperatures". See also Notes References Bibliography Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-63] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users to access specific content or applications without cost. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York, has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
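The mojibake glitch mentioned above is easy to reproduce: text encoded as UTF-8 but decoded with a legacy single-byte encoding turns into garbage. A minimal sketch in Python (the cp1252 codec here is just one common culprit):

```python
# Mojibake: UTF-8 bytes decoded with the wrong legacy encoding.
text = "日本語"                  # "Japanese language", in Japanese
raw = text.encode("utf-8")       # the correct byte representation
print(raw.decode("cp1252"))      # wrong decoder -> mojibake: æ—¥æœ¬èªž
print(raw.decode("utf-8"))       # correct decoder -> 日本語
```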
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas. 
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
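As a minimal sketch of the HTTP request–response exchange described above, the following Python snippet fetches a resource named by a URI using only the standard library; example.com is a placeholder host reserved for documentation:

```python
# Fetch a web resource over HTTP: a browser does essentially this
# for every hyperlink the user follows.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200 (OK)
    print(response.headers["Content-Type"])   # media type of the body
    body = response.read()                    # the document itself
    print(len(body), "bytes received")
```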
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF convenes standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, APNIC for the Asia–Pacific region, ARIN for North America, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and are governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users, who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the names of its first two major protocols). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
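The four layers listed above can be seen in miniature in an ordinary socket program: the application composes a message, TCP (the transport layer) delivers it as a reliable byte stream, IP (the internet layer) routes the packets, and whatever link-layer technology is underneath carries them hop by hop. A hedged sketch in Python; example.com is again a documentation placeholder:

```python
# The TCP/IP layers as they appear to an application programmer.
import socket

# DNS resolves the name; the internet layer (IP) routes the packets.
# Transport layer: create_connection opens a reliable TCP byte stream.
with socket.create_connection(("example.com", 80)) as sock:
    # Application layer: speak HTTP over the stream TCP provides.
    sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n"
                 b"Connection: close\r\n\r\n")
    reply = sock.recv(4096).decode("latin-1")
    print(reply.splitlines()[0])    # status line, e.g. HTTP/1.1 200 OK
```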
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. 
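The subnet arithmetic described above can be checked directly with Python's standard ipaddress module, using the same example prefixes as the text:

```python
# CIDR prefixes, netmasks, and host ranges from the examples above.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)                # 255.255.255.0
print(net.num_addresses)          # 256: 8 host bits -> 2**8 addresses
print(net[0], "-", net[-1])       # 198.51.100.0 - 198.51.100.255
print(ipaddress.ip_address("198.51.100.42") in net)   # True

v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2 ** 96)    # True: 128 - 32 = 96 host bits
```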
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
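Returning to route selection for a moment: routers generally prefer the most specific (longest) matching prefix and fall back to the default route 0.0.0.0/0 only when nothing else matches. A toy lookup over an illustrative table; the prefixes and next hops below are invented for the example:

```python
# Longest-prefix-match lookup over a toy routing table.
import ipaddress

ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "default gateway"),   # default route
    (ipaddress.ip_network("198.51.100.0/24"), "router A"),
    (ipaddress.ip_network("198.51.100.128/25"), "router B"),  # more specific
]

def next_hop(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in ROUTES if dest in net]
    # The most specific matching route (longest prefix) wins.
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(next_hop("198.51.100.200"))   # router B (inside the /25)
print(next_hop("198.51.100.7"))     # router A (only the /24 matches)
print(next_hop("203.0.113.9"))      # default gateway
```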
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. 
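For scale, the factor-of-20,000 spread quoted above is simply the ratio of the two published endpoints:

\[ \frac{136\ \text{kWh/GB}}{0.0064\ \text{kWh/GB}} \approx 2.1 \times 10^{4} \]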
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Minecraft#cite_ref-308] | [TOKENS: 12858] |
Contents Minecraft Minecraft is a sandbox game developed and published by Mojang Studios. Following its initial public alpha release in 2009, it was formally released in 2011 for personal computers. The game has since been ported to numerous platforms, including mobile devices and various video game consoles. In Minecraft, players explore a procedurally generated world with virtually infinite terrain made up of voxels (cubes). They can discover and extract raw materials, craft tools and items, build structures, fight hostile mobs, and cooperate with or compete against other players in multiplayer. The game's large community offers a wide variety of user-generated content, such as modifications, servers, player skins, texture packs, and custom maps, which add new game mechanics and possibilities. The game was originally created by Markus "Notch" Persson using the Java programming language; Jens "Jeb" Bergensten was handed control over its development following the full release. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion; Xbox Game Studios hold the publishing rights for the Bedrock Edition, the unified cross-platform version which evolved from the Pocket Edition codebase[i] and replaced the legacy console versions. Bedrock is updated concurrently with Mojang's original Java Edition, although with numerous, generally small, differences. Minecraft is the best-selling video game in history, with over 350 million copies sold. It has received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions have played prominent roles in popularizing it. The wider Minecraft franchise includes several spin-off games, such as Minecraft: Story Mode, Minecraft Dungeons, and Minecraft Legends. A film adaptation, titled A Minecraft Movie, was released in 2025 and became the second highest-grossing video game film of all time. Gameplay Minecraft is a 3D sandbox video game that has no required goals to accomplish, giving players a large amount of freedom in choosing how to play the game. The game features an optional achievement system. Gameplay is in the first-person perspective by default, but players have the option of third-person perspectives. The game world is composed of rough 3D objects—mainly cubes, referred to as blocks—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a voxel grid, while players can move freely around the world. Players can break, or mine, blocks and then place them elsewhere, enabling them to build things. Very few blocks are affected by gravity, instead maintaining their voxel position in the air. Players can also craft a wide variety of items, such as armor, which mitigates damage from attacks; weapons (such as swords or bows and arrows), which allow monsters and animals to be killed more easily; and tools (such as pickaxes or shovels), which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. They may also freely craft helpful blocks—such as furnaces, which can cook food and smelt ores, and torches, which produce light—or trade with villagers (NPCs), exchanging emeralds for different goods and vice versa. 
The game has an inventory system, allowing players to carry a limited number of items. The in-game time system follows a day and night cycle, with one full cycle lasting for 20 real-time minutes. The game also contains a material called redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems. New players are given a randomly selected default character skin out of nine possibilities, including Steve or Alex, but are able to create and upload their own skins. Players encounter various mobs (short for mobile entities) including animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, spawn during the daytime and can be hunted for food and crafting materials, while hostile mobs—including large spiders, witches, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies and skeletons, burn under the sun if they have no headgear and are not standing in water. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk and drowned variants that spawn in deserts and oceans, respectively. The Minecraft environment is procedurally generated as players explore it using a map seed that is randomly chosen at the time of world creation (or manually specified by the player). Divided into biomes representing different environments with unique resources and structures, worlds are designed to be effectively infinite in traditional gameplay, though technical limits on the player have existed throughout development, both intentionally and not. Implementation of horizontally infinite generation initially resulted in a glitch termed the "Far Lands" at over 12 million blocks away from the world center, where terrain generated as wall-like, fissured patterns. The Far Lands and associated glitches were considered the effective edge of the world until they were resolved, with the current horizontal limit instead being a special impassable barrier called the world border, located 30 million blocks away. Vertical space is comparatively limited, with an unbreakable bedrock layer at the bottom and a building limit several hundred blocks into the sky. Minecraft features three independent dimensions accessible through portals and providing alternate game environments. The Overworld is the starting dimension and represents the real world, with a terrestrial surface setting including plains, mountains, forests, oceans, caves, and small sources of lava. The Nether is a hell-like underworld dimension accessed via an obsidian portal and composed mainly of lava. Mobs that populate the Nether include shrieking, fireball-shooting ghasts, alongside anthropomorphic pigs called piglins and their zombified counterparts. Piglins in particular have a bartering system, where players can give them gold ingots and receive items in return. Structures known as Nether Fortresses generate in the Nether, containing mobs such as wither skeletons and blazes, which can drop blaze rods needed to access the End dimension. The player can also choose to build an optional boss mob known as the Wither, using skulls obtained from wither skeletons and soul sand. 
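The map seed mentioned above is what makes worlds reproducible: the same seed always yields the same terrain, so the world can be generated lazily as players explore rather than stored in advance. The toy sketch below illustrates the idea with a hash in place of real noise functions; it is a generic illustration, not Mojang's actual generator:

```python
# Toy seeded terrain: the same (seed, x, z) always yields the same
# column height, so terrain can be generated on demand as players
# explore. Generic illustration only -- not Minecraft's real algorithm.
import hashlib

def column_height(seed: int, x: int, z: int) -> int:
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return 60 + digest[0] % 8     # deterministic heights from 60 to 67

print([column_height(42, x, 0) for x in range(8)])  # same list every run
print(column_height(42, 3, 0) == column_height(42, 3, 0))   # True
```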
The End can be reached through an end portal, consisting of twelve end portal frames. End portals are found in underground structures in the Overworld known as strongholds. To find strongholds, players must craft eyes of ender using an ender pearl and blaze powder. Eyes of ender can then be thrown, traveling in the direction of the stronghold. Once the player reaches the stronghold, they can place eyes of ender into each portal frame to activate the end portal. The dimension consists of islands floating in a dark, bottomless void. A boss enemy called the Ender Dragon guards the largest, central island. Killing the dragon opens access to an exit portal, which, when entered, cues the game's ending credits and the End Poem, a roughly 1,500-word work written by Irish novelist Julian Gough that takes about nine minutes to scroll past; it is the game's only narrative text and the only text of significant length directed at the player. At the conclusion of the credits, the player is teleported back to their respawn point and may continue the game indefinitely. In Survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter in order to survive at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game unless the player is playing on peaceful difficulty. If the hunger bar is empty, the player starves. Health replenishes when players have a full hunger bar or continuously on peaceful. Upon losing all health, players die. The items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game and can be changed by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players reach them before they despawn, five minutes later. Players may acquire experience points (commonly referred to as "xp" or "exp") by killing mobs and other players, mining, smelting ores, animal breeding, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects. The game features two more game modes based on Survival, known as Hardcore mode and Adventure mode. Hardcore mode plays identically to Survival mode, but with the game's difficulty setting locked to "Hard" and with permadeath, forcing players to delete the world or explore it as a spectator after dying. Adventure mode was added to the game in a post-launch update, and prevents the player from directly modifying the game's world. It was designed primarily for use in custom maps, allowing map designers to let players experience the map as intended. In Creative mode, players have access to an infinite number of all resources and items in the game through the inventory menu and can place or mine them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters usually do not take any damage nor are affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance. 
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, local area network (LAN) play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own server by making a realm, using a host provider, or hosting one themselves, or they can connect directly to another player's game via Xbox Live, PlayStation Network or Nintendo Switch Online. Single-player worlds have LAN support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run multiplayer games easily and safely without having to set up their own server. Unlike a standard server, only invited players can join Realms servers, and these servers do not use server addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. On Bedrock Edition, Minecraft Realms server owners can invite up to 3,000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Bedrock Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, Mojang announced that Realms would support cross-platform play between Windows 10, iOS, and Android platforms starting in June 2016, with Xbox One and Nintendo Switch support, as well as support for virtual reality devices, to come later in 2017. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018. The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, items, and mobs to entire arrays of mechanisms. The modding community is responsible for a substantial supply of mods, from ones that enhance gameplay, such as mini-maps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. 
Players can also create their own "maps" (custom world save files) that often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new achievements, dimensions, functions, loot tables, predicates, recipes, structures, tags, and world generation (a minimal example layout appears below). The Xbox 360 Edition supported downloadable content, which was available to purchase via the Xbox Games Store; these content packs usually contained additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combined texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition did not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released exclusively for the Wii U Edition worldwide on 17 May 2016, and later bundled free with the Nintendo Switch Edition at launch. Another, based on Fallout, was released on consoles that December, and for Windows and Mobile in April 2017. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement explaining that "the code would not be run or read by the game itself", and would run only when the image containing the skin itself was opened. In June 2017, Mojang released the "1.1 Discovery Update" to the Pocket Edition of the game, which later became the Bedrock Edition. The update introduced the "Marketplace", a catalogue of purchasable user-generated content intended to give Minecraft creators "another way to make a living from the game". Various skins, maps, texture packs and add-ons from different creators can be bought with "Minecoins", a digital currency that is purchased with real money. Additionally, users can access specific content with a subscription service titled "Marketplace Pass". Alongside content from independent creators, the Marketplace also houses items published by Mojang and Microsoft themselves, as well as official collaborations between Minecraft and other intellectual properties. By 2022, the Marketplace had over 1.7 billion content downloads, generating over $500 million in revenue. Development Before creating Minecraft, Markus "Notch" Persson was a game developer at King, where he worked until March 2009. At King, he primarily developed browser games and learned several programming languages. During his free time, he prototyped his own games, often drawing inspiration from other titles, and was an active participant on the TIGSource forums for independent developers. One such project was "RubyDung", a base-building game inspired by Dwarf Fortress, but with an isometric, three-dimensional perspective similar to RollerCoaster Tycoon. 
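On the data packs mentioned above: a Java Edition data pack is just a folder of JSON and plain-text files placed in a world's datapacks directory. The sketch below writes a minimal pack containing one function; the pack_format number and the exact folder names vary between game versions, so both are assumptions to check against the version in use:

```python
# Write a minimal Java Edition data pack (layout roughly as used in
# versions 1.13-1.20; newer versions rename the "functions" folder
# and bump pack_format).
import json, pathlib

root = pathlib.Path("my_datapack")
fn_dir = root / "data" / "example" / "functions"
fn_dir.mkdir(parents=True, exist_ok=True)

# pack.mcmeta identifies the pack; pack_format 15 is a placeholder.
(root / "pack.mcmeta").write_text(json.dumps(
    {"pack": {"pack_format": 15, "description": "A minimal example pack"}},
    indent=2))

# Functions are plain-text lists of game commands.
(fn_dir / "hello.mcfunction").write_text("say Hello from a data pack!\n")
# In-game, this would run as: /function example:hello
```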
Among the features in RubyDung that he explored was a first-person view similar to Dungeon Keeper, though he ultimately discarded this idea, feeling the graphics were too pixelated at the time. Around March 2009, Persson left King and joined jAlbum, while continuing to work on his prototypes. Infiniminer, a block-based open-ended mining game first released in April 2009, inspired Persson's vision for RubyDung's future direction. Infiniminer heavily influenced the game's direction and appearance, leading Persson to bring back the first-person mode and adopt the "blocky" visual style and block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements. The first public developmental build of Minecraft was released on 17 May 2009 on TIGSource. Over the years, Persson regularly released test builds that added new features, including tools, mobs, and entire new dimensions. In 2011, partly due to the game's rising popularity, Persson decided to release the full 1.0 version—the second part of the "Adventure Update"—on 18 November 2011. Shortly after, Persson stepped down from development, handing the project's lead to Jens "Jeb" Bergensten. On 15 September 2014, Microsoft, the developer behind the Microsoft Windows operating system and Xbox video game console, announced a $2.5 billion acquisition of Mojang, which included the Minecraft intellectual property. Persson had suggested the deal on Twitter, asking a corporation to buy his stake in the game after receiving criticism for enforcing terms in the game's end-user license agreement (EULA), which had been in place for the previous three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies, including Activision Blizzard and Electronic Arts. The deal with Microsoft was finalized on 6 November 2014 and led to Persson becoming one of Forbes' "World's Billionaires". After 2014, Minecraft's primary versions usually received annual major updates—free to players who had purchased the game—each primarily centered around a specific theme. For instance, version 1.13, the Update Aquatic, focused on ocean-related features, while version 1.16, the Nether Update, introduced significant changes to the Nether dimension. However, in late 2024, Mojang announced a shift in their update strategy; rather than releasing large updates annually, they opted for a more frequent release schedule with smaller, incremental updates, stating, "We know that you want new Minecraft content more often." The Bedrock Edition has also received regular updates, now matching the themes of the Java Edition updates. Other versions of the game, such as various console editions and the Pocket Edition, were either merged into Bedrock or discontinued and have not received further updates. On 7 May 2019, coinciding with Minecraft's 10th anniversary, a JavaScript recreation of an old 2009 Java Edition build named Minecraft Classic was made available to play online for free. On 16 April 2020, a Bedrock Edition-exclusive beta version of Minecraft, called Minecraft RTX, was released by Nvidia. It introduced physically based rendering, real-time path tracing, and DLSS for RTX-enabled GPUs. The public release was made available on 8 December 2020.
Path tracing can be enabled only in supported worlds, which can be downloaded for free via the in-game Minecraft Marketplace, obtained with a texture pack from Nvidia's website, or created with compatible third-party texture packs; it cannot simply be switched on for an arbitrary world or texture pack. Initially, Minecraft RTX was affected by many bugs, display errors, and instability issues. On 22 March 2025, a new visual mode called Vibrant Visuals, an optional graphical overhaul similar to Minecraft RTX, was announced. It promises modern rendering features—such as dynamic shadows, screen space reflections, volumetric fog, and bloom—without the need for RTX-capable hardware. Vibrant Visuals was released as part of the Chase the Skies update on 17 June 2025 for Bedrock Edition and is planned to release on Java Edition at a later date. Development of the original edition of Minecraft—then known as Cave Game, and now known as the Java Edition—began in May 2009 and culminated on 13 May, when Persson released a test video on YouTube of an early version of the game, dubbed the "Cave game tech test" or the "Cave game tech demo". The game was named Minecraft: Order of the Stone the next day, after a suggestion made by a player. "Order of the Stone" came from the webcomic The Order of the Stick, and "Minecraft" was chosen "because it's a good name". The title was later shortened to just Minecraft, omitting the subtitle. Persson completed the game's base programming over a weekend in May 2009, and private testing began on TigIRC on 16 May. The first public release followed on 17 May 2009 as a developmental version shared on the TIGSource forums. Based on feedback from forum users, Persson continued updating the game. This initial public build later became known as Classic. Further developmental phases—dubbed Survival Test, Indev, and Infdev—were released throughout 2009 and 2010. The first major update, known as Alpha, was released on 30 June 2010. At the time, Persson was still working a day job at jAlbum but later resigned to focus on Minecraft full-time as sales of the alpha version surged. Updates were distributed automatically, introducing new blocks, items, mobs, and changes to game mechanics such as water flow. With revenue generated from the game, Persson founded Mojang, a video game studio, alongside former colleagues Jakob Porser and Carl Manneh. On 11 December 2010, Persson announced that Minecraft would enter its beta phase on 20 December. He assured players that bug fixes and all pre-release updates would remain free. As development progressed, Mojang expanded, hiring additional employees to work on the project. The game officially exited beta and launched in full on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced the hiring of the developers behind Bukkit, a popular developer API for Minecraft servers, to improve Minecraft's support of server modifications. The move included Mojang taking apparent ownership of the CraftBukkit server mod, an acquisition that later became controversial and whose legitimacy was questioned due to CraftBukkit's open-source nature and its licensing under the GNU General Public License and Lesser General Public License. In August 2011, Minecraft: Pocket Edition was released as an early alpha for the Xperia Play via the Android Market, later expanding to other Android devices on 8 October 2011. The iOS version followed on 17 November 2011.
A port was made available for Windows Phones shortly after Microsoft acquired Mojang. Unlike Java Edition, Pocket Edition initially focused on Minecraft's creative building and basic survival elements but lacked many features of the PC version. Bergensten confirmed on Twitter that the Pocket Edition was written in C++ rather than Java, as iOS does not support Java. On 10 December 2014, a port of Pocket Edition was released for Windows Phone 8.1. In July 2015, a port of the Pocket Edition to Windows 10 was released as the Windows 10 Edition, with full cross-play with other Pocket Edition versions. In January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 20 September 2017, with the "Better Together Update", the Pocket Edition was ported to the Xbox One and renamed the Bedrock Edition. The console versions of Minecraft debuted with the Xbox 360 Edition, developed by 4J Studios and released on 9 May 2012. Announced as part of the Xbox Live Arcade NEXT promotion, this version introduced a redesigned crafting system, a new control interface, in-game tutorials, split-screen multiplayer, and online play via Xbox Live. Unlike the PC version, its worlds were finite, bordered by invisible walls. Initially, the Xbox 360 version resembled outdated PC versions, but it received updates that brought it closer to Java Edition before eventually being discontinued. The Xbox One version launched on 5 September 2014, featuring larger worlds and support for more players. Minecraft expanded to PlayStation platforms with PlayStation 3 and PlayStation 4 editions released on 17 December 2013 and 4 September 2014, respectively. Originally planned as a PS4 launch title, the latter was delayed before its eventual release. A PlayStation Vita version followed in October 2014. Like the Xbox versions, the PlayStation editions were developed by 4J Studios. Nintendo platforms received Minecraft: Wii U Edition on 17 December 2015, with a physical release in North America on 17 June 2016 and in Europe on 30 June. The Nintendo Switch version launched via the eShop on 11 May 2017. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition, based on the Pocket Edition, would be available for download immediately after the livestream, with a physical copy available at a later date. The game is compatible only with the New Nintendo 3DS and New Nintendo 2DS XL systems and does not work with the original 3DS or 2DS systems. On 20 September 2017, the Better Together Update introduced Bedrock Edition across Xbox One, Windows 10, VR, and mobile platforms, enabling cross-play between these versions. Bedrock Edition later expanded to Nintendo Switch and PlayStation 4, with the latter receiving the update in December 2019, allowing cross-platform play for users with a free Xbox Live account. A native PlayStation 5 version of Bedrock Edition was released on 22 October 2024, while the Xbox Series X/S version launched on 17 June 2025. On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and later became known as the "Legacy Console Editions". On 15 January 2019, the New Nintendo 3DS version of Minecraft received its final update, effectively becoming discontinued as well. An educational version of Minecraft, designed for use in schools, launched on 1 November 2016. It is available on Android, ChromeOS, iPadOS, iOS, MacOS, and Windows.
On 20 August 2018, Mojang announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released on the App Store on 6 September 2018. On 27 March 2019, it was announced that the edition would be operated by JD.com in China. On 26 June 2020, a public beta for the Education Edition was made available to Google Play Store-compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020. On 20 May 2016, China Edition (also known as My World) was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile versions are based on the Bedrock Edition. The edition is free-to-play and had over 700 million registered accounts by September 2023. Another version of Bedrock Edition, originally released as the Windows 10 Edition, is exclusive to Microsoft's Windows 10 and Windows 11 operating systems. Its beta launched on the Windows Store on 29 July 2015, and after nearly a year and a half in beta, Microsoft fully released the version on 19 December 2016. Called the "Ender Update", this release added new features to this version of Minecraft, such as world templates and add-on packs. On 7 June 2022, the Java and Bedrock Editions of Minecraft were merged into a single bundle for purchase on Windows; those who owned one version would automatically gain access to the other. Both game versions would otherwise remain separate. Around 2011, prior to Minecraft's full release, Mojang collaborated with The Lego Group to create a Lego brick-based Minecraft game called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1×1 block to account for the larger pieces typically used in Lego sets. Persson worked on an early version called "Project Rex Kwon Do", named after a character from the film Napoleon Dynamite. Although Lego approved the project and Mojang assigned two developers to it for six months, it was canceled due to the Lego Group's demands, according to Mojang's Daniel Kaplan. Lego considered buying Mojang to complete the game, but when Microsoft offered over $2 billion for the company, Lego stepped back, unsure of Minecraft's potential. On 26 June 2025, a build of Brickcraft dated 28 June 2012 was published on the community archive website Omniarchive. Initially, Markus Persson planned to support the Oculus Rift with a Minecraft port. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, stating, "Facebook creeps me out." In 2016, a community-made mod, Minecraft VR, added VR support for Java Edition, followed by Vivecraft for the HTC Vive. Later that year, Microsoft introduced official Oculus Rift support for the Windows 10 Edition, leading to the discontinuation of the Minecraft VR mod due to trademark complaints. Vivecraft was endorsed by Minecraft VR's contributors for its Rift support. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 Bedrock version would receive PlayStation VR support later that month.
In September 2024, the Minecraft team announced they would no longer support PlayStation VR, which received its final update in March 2025. Music and sound design Minecraft's music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. To create the sound effects for the game, Rosenfeld made extensive use of Foley techniques. Of learning the process for the game, he remarked, "Foley's an interesting thing, and I had to learn its subtleties. Early on, I wasn't that knowledgeable about it. It's a whole trial-and-error process. You just make a sound and eventually you go, 'Oh my God, that's it! Get the microphone!' There's no set way of doing anything at all." He reminisced about creating the in-game sound for grass blocks, stating, "It turns out that to make grass sounds you don't actually walk on grass and record it, because grass sounds like nothing. What you want to do is get a VHS, break it apart, and just lightly touch the tape." According to Rosenfeld, his favorite sound to design for the game was the hisses of spiders. He elaborated, "I like the spiders. Recording that was a whole day of me researching what a spider sounds like. Turns out, there are spiders that make little screeching sounds, so I think I got this recording of a fire hose, put it in a sampler, and just pitched it around until it sounded like a weird spider was talking to you." Many of Rosenfeld's sound design decisions were made accidentally or spontaneously. The creeper notably lacks any specific noises apart from a loud fuse-like sound when it is about to explode; Rosenfeld later recalled, "That was just a complete accident by Markus and me [sic]. We just put in a placeholder sound of burning a matchstick. It seemed to work hilariously well, so we kept it." On other sounds, such as those of the zombie, Rosenfeld remarked, "I actually never wanted the zombies so scary. I intentionally made them sound comical. It's nice to hear that they work so well [...]." Rosenfeld found the sound engine "terrible" to work with, remembering, "If you had two song files at once, it [the game engine] would actually crash. There were so many more weird glitches like that the guys never really fixed because they were too busy with the actual game and not the sound engine." The background music in Minecraft consists of instrumental ambient music. To compose the music of Minecraft, Rosenfeld used the software package Ableton Live, along with several additional plug-ins. Speaking of them, Rosenfeld said, "They can be pretty much everything from an effect to an entire orchestra. Additionally, I've got some synthesizers that are attached to the computer. Like a Moog Voyager, Dave Smith Prophet 08 and a Virus TI." On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which included music that was added in a 2013 "Music Update" for the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015.
On 14 August 2020, Ghostly released Volume Beta on CD and vinyl, with alternate-color LPs and lenticular cover pressings released in limited quantities. The final update Rosenfeld worked on was 2018's 1.13 Update Aquatic, and his music remained the only music in the game until 2020's "Nether Update", which introduced pieces by Lena Raine. Since then, other composers have made contributions, including Kumi Tanioka, Samuel Åberg, Aaron Cherof, and Amos Roddy, with Raine remaining the game's primary composer. Ownership of all music besides Rosenfeld's independently released albums has been retained by Microsoft, whose label publishes the other artists' releases. Gareth Coker also composed some of the music for the mini games in the Legacy Console editions. Rosenfeld had stated his intent to create a third album of music for the game in a 2015 interview with Fact, and confirmed its existence in a 2017 tweet, stating that his work on the record by then had grown longer than the previous two albums combined, which together run over 3 hours and 18 minutes. However, due to licensing issues with Microsoft, the third volume has yet to be released. On 8 January 2021, Rosenfeld was asked in an interview with Anthony Fantano whether there was still a third volume of his music intended for release. Rosenfeld responded, "I have something—I consider it finished—but things have become complicated, especially as Minecraft is now a big property, so I don't know." Reception Minecraft has received critical acclaim, with praise for the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have praised Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that it strikes a good balance between exploring and building. The game's multiplayer has generally been received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands. IGN was disappointed by the troublesome steps needed to set up multiplayer servers, calling the process a "hassle". Critics also said that visual glitches occurred periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seemed "incomplete or thrown together in haste". A review of the alpha version by Scott Munro of the Daily Record called it "already something special" and urged readers to buy it. Jim Rossignol of Rock Paper Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, the gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version.
Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they praised the port's addition of a tutorial and of in-game tips and crafting recipes, saying that these made the game more user-friendly. The Xbox One Edition was one of the best-received ports, praised for its relatively large worlds. The PlayStation 3 Edition also received generally favorable reviews, being compared to the Xbox 360 Edition and praised for its well-adapted controls. The PlayStation 4 edition was the best-received port to date, praised for worlds 36 times larger than those of the PlayStation 3 edition and described as nearly identical to the Xbox One edition. The PlayStation Vita Edition received generally positive reviews from critics but was noted for its technical limitations. The Wii U version received generally positive reviews from critics but was noted for a lack of GamePad integration. The 3DS version received mixed reviews, being criticized for its high price, technical issues, and lack of cross-platform play. The Nintendo Switch Edition received fairly positive reviews from critics, being praised, like other modern ports, for its relatively larger worlds. Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content. Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the time, the game had no publisher backing and had never been commercially advertised except through word of mouth and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the sixth best-selling PC game of all time. As of 10 October 2014, the game had sold 17 million copies on PC, becoming the best-selling PC game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 700 million registered accounts by September 2023. By 2023, the game had sold over 300 million copies. As of April 2025, Minecraft has sold over 350 million copies. The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since debuting on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day.
As of 4 April 2014, the Xbox 360 version had sold 12 million copies. In addition, Minecraft: Pocket Edition had reached 21 million in sales. The PlayStation 3 Edition sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both the PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft had been sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. As of 2022, the Vita version has sold over 1.65 million physical copies in Japan, making it the best-selling Vita game in the country. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter. The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms, with over 126 million monthly active players. By April 2021, the number of monthly active users had climbed to 140 million. In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth-best game of the year as well as the eighth-best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards, for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival, and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won the awards for Best Debut Game, Best Downloadable Game and the Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games to be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit, which opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category, and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. During the 16th Annual D.I.C.E. Awards, the Academy of Interactive Arts & Sciences nominated the Xbox 360 version of Minecraft for "Strategy/Simulation Game of the Year". Minecraft Console Edition won the TIGA Game of the Year award in 2014. In 2015, the game placed sixth on USgamer's The 15 Best Games Since 2000 list. In 2016, Minecraft placed sixth on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards. Minecraft also won "Stream Game of the Year" at the inaugural Streamer Awards in 2022. The game later garnered a Nickelodeon Kids' Choice Award nomination for Favorite Video Game in 2021, and won the same category in 2022 and 2023. At the Golden Joystick Awards 2025, it won the Still Playing Award – PC and Console. Minecraft has been subject to several notable controversies. In June 2014, Mojang announced that it would begin enforcing the portion of Minecraft's end-user license agreement (EULA) which prohibits servers from giving in-game advantages to players in exchange for donations or payments. Spokesperson Owen Hill stated that servers could still require players to pay a fee to access the server and could sell in-game cosmetic items. The change was supported by Persson, citing emails he had received from parents of children who had spent hundreds of dollars on servers. The Minecraft community and server owners protested, arguing that the EULA's terms were broader than Mojang claimed, that the crackdown would force smaller servers to shut down for financial reasons, and that Mojang was suppressing competition for its own Minecraft Realms subscription service. The controversy contributed to Persson's decision to sell Mojang. In 2020, Mojang announced an eventual change to the Java Edition requiring a login from a Microsoft account rather than a Mojang account, the latter of which would be sunsetted. This also required Java Edition players to create Xbox network Gamertags. Mojang defended the move to Microsoft accounts by saying that it offered improved security, including two-factor authentication, blocking of cyberbullies in chat, and improved parental controls. The community responded with intense backlash, citing various technical difficulties encountered in the process and the fact that account migration would be mandatory, even for those who do not play on servers. As of 10 March 2022, Microsoft required all players to migrate in order to maintain access to the Java Edition of Minecraft. Mojang announced a deadline of 19 September 2023 for account migration, after which all legacy Mojang accounts became inaccessible and unable to be migrated. In June 2022, Mojang added a player-reporting feature to Java Edition. Players could report other players on multiplayer servers for sending messages prohibited by the Xbox Live Code of Conduct; report categories included profane language, substance abuse, hate speech, threats of violence, and nudity. If a player was found to be in violation of the Xbox Community Standards, they would be banned from all servers for a specific period of time or permanently. The update containing the report feature (1.19.1) was released on 27 July 2022. Mojang received substantial backlash and protest from community members, one of the most common complaints being that banned players would be forbidden from joining any server, even private ones.
Others took issue with what they saw as Microsoft increasing control over its player base and exercising censorship, leading some to start the hashtag #saveminecraft and dub the version "1.19.84", a reference to the dystopian novel Nineteen Eighty-Four. The "Mob Vote" was an online event organized by Mojang in which the Minecraft community voted between three original mob concepts. Initially, the winning mob was to be implemented in a future update while the losing mobs were scrapped, though after the first Mob Vote this was changed so that losing mobs would still have a chance to come to the game in the future. The first Mob Vote was held during Minecon Earth 2017, and it became an annual event starting with Minecraft Live 2020. The Mob Vote was often criticized for forcing players to choose one mob instead of implementing all three, causing divisions and flaming within the community, and potentially allowing internet bots and Minecraft content creators with large fanbases to conduct vote brigading. The Mob Vote was also blamed for a perceived lack of new content added to Minecraft since Microsoft's acquisition of Mojang in 2014. The 2023 Mob Vote featured three passive mobs—the crab, the penguin, and the armadillo—with voting scheduled to start on 13 October. In response, a Change.org petition was created on 6 October, demanding that Mojang eliminate the Mob Vote and instead implement all three mobs going forward. The petition received approximately 445,000 signatures by 13 October and was joined by calls to boycott the Mob Vote, as well as a partially tongue-in-cheek "revolutionary" propaganda campaign in which sympathizers created anti-Mojang and pro-boycott posters in the vein of real 20th-century propaganda posters. Mojang did not release an official response to the boycott, and the Mob Vote otherwise proceeded normally, with the armadillo winning. In September 2024, as part of a blog post detailing its future plans for Minecraft's development, Mojang announced that the Mob Vote would be retired. Cultural impact In September 2019, The Guardian classified Minecraft as the best video game of the 21st century to date, and in November 2019, Polygon called it the "most important game of the decade" in its 2010s "decade in review". In June 2020, Minecraft was inducted into the World Video Game Hall of Fame. Minecraft is recognized as one of the first successful games to use an early access model, drawing in sales prior to the full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model among indie games. Social media sites such as YouTube, Facebook, and Reddit have played a significant role in popularizing Minecraft. Research conducted by the Annenberg School for Communication at the University of Pennsylvania showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos, often made by commentators, began to gain influence on YouTube. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. The game went on to be a prominent fixture of YouTube's gaming scene throughout the 2010s; in 2014, it was the second-most-searched term on the entire platform.
By 2018, it was still YouTube's biggest game globally. Some popular commentators have received employment at Machinima, a now-defunct gaming video company that owned a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at Minecon 2011 had the highest attendance. Another well-known YouTube personality is Jordan Maron, known online as CaptainSparklez, who has also created many Minecraft music parodies, including "Revenge", a parody of Usher's "DJ Got Us Fallin' in Love". Minecraft's popularity on YouTube was described by Polygon as quietly dominant, although in 2019, thanks in part to PewDiePie's playthroughs of the game, Minecraft experienced a visible uptick in popularity on the platform. Longer-running series include Far Lands or Bust, dedicated to reaching the obsolete "Far Lands" glitch on foot in an older version of the game. On 14 December 2021, YouTube announced that the total number of Minecraft-related views on the website had exceeded one trillion. Minecraft has been referenced by other video games, such as Torchlight II, Team Fortress 2, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, and FTL: Faster Than Light. Minecraft is officially represented in downloadable content for the crossover fighter Super Smash Bros. Ultimate, with Steve as a playable character whose moveset includes references to building, crafting, and redstone, alongside an Overworld-themed stage. It has also been referenced by electronic music artist Deadmau5 in his performances. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park. In 2025, A Minecraft Movie was released; it made $313 million at the box office in its first week, a record-breaking opening for a video game adaptation. Minecraft has been noted as a cultural touchstone for Generation Z, as many of the generation's members played the game at a young age. The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design (CAD) and education. In a panel at Minecon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering in Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap. In September 2012, Mojang began the Block by Block project in cooperation with UN-Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhoods.
Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding, "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed the Minecraft building community FyreUK to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and was in its planning phase at the time. The Block by Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens served as a template for political decisions. In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on its own geodata. This was possible because Denmark is one of the flattest countries, with its highest point at 171 meters (the 30th-smallest elevation span of any country), while the height limit in default Minecraft was around 192 meters above in-game sea level when the project was completed. Taking advantage of the game's accessibility in places where other websites are censored, the non-governmental organization Reporters Without Borders has used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors who have been censored and arrested, such as Jamal Khashoggi, from countries including Egypt, Mexico, Russia, Saudi Arabia and Vietnam. The neoclassical virtual building was created over about 250 hours by an international team of 24 people. Despite the game's unpredictable nature, Minecraft speedrunning, in which players time themselves from spawning into a new world to reaching the End and defeating the Ender Dragon boss, is popular. Some speedrunners use a combination of mods, external programs, and debug menus, while other runners play the game in a more vanilla or more consistency-oriented way. Minecraft has been used in educational settings through initiatives such as MinecraftEdu, founded in 2011 to make the game affordable and accessible for schools in collaboration with Mojang. MinecraftEdu provided features allowing teachers to monitor student progress, including screenshot submissions as evidence of lesson completion, and by 2012 it reported that approximately 250,000 students worldwide had access to the platform. Mojang also developed Minecraft: Education Edition with pre-built lesson plans for up to 30 students in a closed environment. Educators have used Minecraft to teach subjects such as history, language arts, and science through custom-built environments, including reconstructions of historical landmarks and large-scale models of biological structures such as animal cells. The introduction of redstone blocks enabled the construction of functional virtual machines such as a hard drive and an 8-bit computer, and mods have been created to use these mechanics for teaching programming; the boolean logic underlying such machines is sketched after this paragraph. In 2014, the British Museum announced a project to reproduce its building and exhibits in Minecraft in collaboration with the public. Microsoft and Code.org have offered Minecraft-based tutorials and activities designed to teach programming, reporting by 2018 that more than 85 million children had used their resources.
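The computational claim above rests on the fact that redstone components can express boolean logic. A redstone torch attached to a block switches off while that block is powered, so a single torch computes NOT; a torch whose block is fed by several input wires computes NOR; and, by De Morgan's laws, every other gate can be composed from these two. The following is a minimal sketch of the standard constructions, stated abstractly rather than as a description of any specific in-game build:

    NOT(a)    = one torch on a block powered by input a
    NOR(a, b) = one torch on a block powered by inputs a and b
    OR(a, b)  = NOT(NOR(a, b))        (a second torch inverting a NOR)
    AND(a, b) = NOR(NOT(a), NOT(b))   (two inverted inputs feeding a NOR)

Because NOR is functionally complete, chains of torches and dust are sufficient in principle to build adders, memory cells, and ultimately machines like the 8-bit computer mentioned above.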
In 2025, the Musée de Minéralogie in Paris held a temporary exhibition titled "Minerals in Minecraft". Following the initial surge in Minecraft's popularity in 2010, other video games were criticized for having various similarities to Minecraft, and some were described as "clones", whether due to direct inspiration from Minecraft or a superficial resemblance. Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, BlockWorld 3D, Total Miner, and Luanti (formerly Minetest). David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game given the technical limitations of the system. In response to Microsoft's acquisition of Mojang and the Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms not to officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). Fans' fears proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed. Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011. In 2025, Persson announced through a poll on his X account that he was considering developing a spiritual successor to Minecraft. He later clarified that he was "100% serious" and that he had "basically announced Minecraft 2". Within days, however, Persson cancelled the plans after speaking to his team. In November 2024, artificial intelligence companies Decart and Etched released Oasis, an artificially generated version of Minecraft, as a proof of concept. Every in-game element is generated by AI in real time, and the model does not store world data, leading to "hallucinations" such as items and blocks appearing that were not there before. In January 2026, indie game developer Unomelon announced that their voxel sandbox game Allumeria would be playable in Steam Next Fest that year. On 10 February, Mojang issued a DMCA takedown of Allumeria on Steam through Valve, alleging that the game infringed on Minecraft's copyright. Some reports suggested that the takedown may have used an automatic AI copyright-claiming service. The takedown was later withdrawn. Minecon was an annual official fan convention dedicated to Minecraft. The first full Minecon was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture sessions with Mojang employees and well-known contributors from the Minecraft community. In 2016, Minecon was held in person for the last time, with the following years featuring annual "Minecon Earth" livestreams on minecraft.net and YouTube instead. These livestreams, later rebranded as "Minecraft Live", included the mob and biome votes and announcements of new game updates.
In 2025, "Minecraft Live" became a biannual event as part of Minecraft's changing update schedule.[citation needed] Notes References External links |
======================================== |