[SOURCE: https://en.wikipedia.org/wiki/K._Eric_Drexler] | [TOKENS: 1490]
K. Eric Drexler

Kim Eric Drexler (born April 25, 1955) is an American engineer best known for introducing molecular nanotechnology (MNT) and for his studies of its potential in the 1970s and 1980s. His 1991 doctoral thesis at the Massachusetts Institute of Technology (MIT) was revised and published as the book Nanosystems: Molecular Machinery, Manufacturing, and Computation (1992), which received the Association of American Publishers award for Best Computer Science Book of 1992. He has been called the "godfather of nanotechnology".

Life and work

K. Eric Drexler was strongly influenced by ideas on limits to growth in the early 1970s. During his first year at MIT, he sought out someone who was working on extraterrestrial resources. He found Gerard K. O'Neill of Princeton University, a physicist famous for his work on storage rings for particle accelerators and for his landmark work on the concepts of space colonization. Drexler participated in NASA summer studies on space colonies in 1975 and 1976. He fabricated metal thin films a few tens of nanometers thick on a wax support to demonstrate the potential of high-performance solar sails. He was active in space politics, helping the L5 Society defeat the Moon Treaty in 1980. Besides working summers for O'Neill building mass driver prototypes, Drexler delivered papers at the first three Space Manufacturing conferences at Princeton. The 1977 and 1979 papers were co-authored with Keith Henson, and patents were issued on both subjects, vapor-phase fabrication and space radiators.

During the late 1970s, Drexler began to develop ideas about molecular nanotechnology (MNT). In 1979, he encountered Richard Feynman's provocative 1959 talk "There's Plenty of Room at the Bottom". In 1981, Drexler wrote a seminal research article, published by PNAS, "Molecular engineering: An approach to the development of general capabilities for molecular manipulation". The article has been cited more than 620 times in the 35 years since. The term "nano-technology" had been coined by the Tokyo University of Science professor Norio Taniguchi in 1974 to describe the precision manufacture of materials with nanometer tolerances; Drexler, unaware of this coinage, used a related term in his 1986 book Engines of Creation: The Coming Era of Nanotechnology to describe what later became known as molecular nanotechnology (MNT). In that book, he proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity. He also first published the term "grey goo" to describe what might happen if a hypothetical self-replicating molecular assembler went out of control. He has subsequently tried to clarify his concerns about out-of-control self-replicators and to make the case that molecular manufacturing does not require such devices.

Drexler holds three degrees from MIT. He received his B.S. in Interdisciplinary Sciences in 1977 and his M.S. in Astro/Aerospace Engineering in 1979, with a master's thesis titled "Design of a High Performance Solar Sail System". In 1991, he earned a Ph.D. through the MIT Media Lab (formally, the Media Arts and Sciences Section, School of Architecture and Planning) after the Department of Electrical Engineering and Computer Science refused to approve his plan of study. His Ph.D.
work was the first doctoral degree on the topic of molecular nanotechnology, and his thesis, "Molecular Machinery and Manufacturing with Applications to Computation", was published (with minor editing) as Nanosystems: Molecular Machinery, Manufacturing, and Computation (1992), which received the Association of American Publishers award for Best Computer Science Book of 1992.

In 1981, Drexler married Christine Peterson. The marriage ended in 2002. In 2006, Drexler married Rosa Wang, a former investment banker who works with Ashoka: Innovators for the Public on improving the social capital markets. Drexler has arranged to be cryonically preserved in the event of legal death.

Reception

Drexler's work on nanotechnology was criticized as naive by Nobel Prize winner Richard Smalley in a 2001 Scientific American article. Smalley first argued that "fat fingers" made MNT impossible. He later argued that nanomachines would have to resemble chemical enzymes more than Drexler's assemblers, and could only work in water. Drexler maintained that both were straw man arguments, and in the case of enzymes wrote that "Prof. Klibanov wrote in 1994, ' ... using an enzyme in organic solvents eliminates several obstacles ... '". Drexler had difficulty getting Smalley to respond, but in December 2003, Chemical & Engineering News carried a four-part debate. Ray Kurzweil has also disputed Smalley's arguments.

The National Academies of Sciences, Engineering, and Medicine, in its 2006 review of the National Nanotechnology Initiative, argued that it is difficult to predict the future capabilities of nanotechnology: "Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal."

Drexler and his work on nanotechnology have been referenced in various media, particularly in science fiction literature. In Neal Stephenson's science fiction novel The Diamond Age, he is portrayed as one of the heroes of a future world shaped by nanotechnology. In the science fiction novel Newton's Wake by Ken MacLeod, a "drexler" is a nanotech assembler of pretty much anything that fits in the volume of the particular machine, from socks to starships. Drexler is also mentioned in the science fiction book Decipher by Stel Pavlou; his book is cited as one of the starting points of nanomachine construction and as a guide to how carbon-60 was to be applied. James Rollins references Drexler's Engines of Creation in his novel Excavation, using his theory of a molecular machine in two sections as a possible explanation for the mysterious "Substance Z" in the story. He is also mentioned in Timothy Leary's Design for Dying and in Michael Crichton's 2002 novel Prey. Drexler was mentioned in DC Comics' Doom Patrol in 1992.
The Drexler Facility (ドレクサー機関) of molecular nanotechnology research in the Japanese eroge visual novels Baldr Sky is named after him. The "Assemblers" are its key invention.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-Minelli2009_131-0] | [TOKENS: 6011]
Animal

Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated that there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). Animals have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology.

The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan. The vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the last of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria.

Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes appeared during the Ordovician radiation, 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya, during the Cryogenian period.

Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa.

Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and other services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and they feature frequently in mythology, religion, the arts, literature, heraldry, politics, and sports.
Etymology

The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa.

Characteristics

Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi; animals instead ingest organic material and digest it internally. Animals also have structural characteristics that set them apart from all other living things. Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth.

Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated mating with close relatives during sexual reproduction generally leads to inbreeding depression within a population due to the increased prevalence of harmful recessive traits, and animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, as in aphids.

Ecology

Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction in which the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. The selective pressures they impose on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic and competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges.

Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels acquire the nutrients indirectly by eating the herbivores, or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows them to grow, to sustain basal metabolism, and to fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidation of inorganic compounds such as hydrogen sulfide) by archaea and bacteria.

Animals originated in the ocean; all extant animal phyla except Micrognathozoa and Onychophora feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move onto land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars.

Diversity

The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. Estimates of the numbers of described extant species for the major animal phyla vary with their principal habitats (terrestrial, fresh water, and marine) and with free-living or parasitic ways of life. Such estimates are based on the numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a]

Evolutionary origin

Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is produced only by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record.

The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia established their animal nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments.

Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 Mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do.

Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 Gya) may indicate the presence of triploblastic worm-like animals roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 Gya rocks in North America, in 1.5 Gya rocks in Australia and North America, and in 1.7 Gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures.

Phylogeny

Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, placing them in an external phylogeny alongside the Holomycota (including fungi), the Ichthyosporea, the Pluriformea and the Filasterea, with uncertain relationships among these lineages indicated by dashed lines in their cladogram. The animal clade had certainly originated by 650 Mya, and may have come into being as much as 800 Mya, based on molecular clock evidence for different phyla.

The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like sponges, the Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals; the presence of Hox genes in Placozoa, however, shows that they were once more complex. The Porifera (sponges) have long been assumed to be the sister group to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and the ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, favouring a sponge-sister cladogram in which Porifera branches first, followed successively by Ctenophora, Placozoa, and finally Cnidaria plus Bilateria (their ctenophore-sister tree simply interchanges the places of the ctenophores and the sponges). Conversely, a 2023 study by Darrin Schultz and colleagues used ancient gene linkages to construct a ctenophore-sister phylogeny, in which Ctenophora branches first, followed successively by Porifera, Placozoa, and Cnidaria plus Bilateria.

Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity to all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike in all other animals. They typically feed by drawing in water through pores, filtering out small particles of food.

The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined and under active research.
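The two competing topologies described above are easy to mis-read in prose. Purely as an illustrative aid (not part of the article), the following minimal Python sketch encodes them as nested tuples and prints them in Newick notation; the topologies follow the Feuda et al. and Schultz et al. descriptions summarised above, while all function and variable names are our own.

```python
from typing import Union

# A tree is either a leaf (phylum name) or a 2-tuple of subtrees.
Tree = Union[str, tuple]

# Sponge-sister hypothesis: Porifera is sister to all other animals.
SPONGE_SISTER: Tree = ("Porifera",
                       ("Ctenophora",
                        ("Placozoa",
                         ("Cnidaria", "Bilateria"))))

# Ctenophore-sister hypothesis: Ctenophora is sister to all other animals.
CTENOPHORE_SISTER: Tree = ("Ctenophora",
                           ("Porifera",
                            ("Placozoa",
                             ("Cnidaria", "Bilateria"))))

def to_newick(tree: Tree) -> str:
    """Render a nested-tuple tree in Newick notation."""
    if isinstance(tree, str):
        return tree
    left, right = tree
    return f"({to_newick(left)},{to_newick(right)})"

if __name__ == "__main__":
    print("sponge-sister:    ", to_newick(SPONGE_SISTER) + ";")
    print("ctenophore-sister:", to_newick(CTENOPHORE_SISTER) + ";")
```

Printing both strings makes the point of the debate concrete: the two hypotheses differ only in which lineage branches first, with the rest of the tree unchanged.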
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny of the Bilateria, the Xenacoelomorpha branch first, with the remaining bilaterians divided between the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia).

Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics; for example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures.

Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians.

Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, with the anus forming secondarily; in deuterostomes, the anus forms first and the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm; in deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting; among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification

In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul), down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about.

In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians.

In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata: echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia.

In human culture

The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food, and a smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects (principally bees and silkworms) and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture.

Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccination was discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts.

A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, to reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots. However, the most commonly kept pets are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport.

The signs of the Western and Chinese zodiacs are based on animals. In China and Japan the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is likewise a symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
========================================
[SOURCE: https://www.mako.co.il/food-feed/2026-m02_w03/shorts-8146f9b33d06c91027.htm] | [TOKENS: 502]
Looks tasty? It's even tastier than it looks. A fried egg is a kind of magic in itself, and today we want to introduce you to an upgrade that takes that magic to new places. All you need to add is honey and grated mozzarella, and we swear: there is a chance this will be the tastiest egg you have ever eaten. Yael Katzav, 15.02.2026 | To the recipe
========================================
[SOURCE: https://en.wikipedia.org/wiki/North_China_Plain] | [TOKENS: 1215]
North China Plain

The North China Plain (simplified Chinese: 华北平原; traditional Chinese: 華北平原; pinyin: Huáběi Píngyuán) is a large-scale downfaulted rift basin formed in the late Paleogene and Neogene and then modified by the deposits of the Yellow River. It is the largest alluvial plain of China. The plain is bordered to the north by the Yanshan Mountains, to the west by the Taihang Mountains, to the south by the Dabie Mountains, and to the east by the Yellow Sea and Bohai Sea. The Yellow River flows through the plain before its waters empty into the Bohai Sea.

The part of the North China Plain around the banks of the middle and lower Yellow River is commonly referred to as the Central Plain (pinyin: Zhōngyuán). This portion of the North China Plain formed the cradle of Chinese civilization, and it is the region from which the Han Chinese people emerged. Beijing, the capital of China, is located on the northeast edge of the plain, with Tianjin, an important industrial city and commercial port, near its northeast coast. Jinan (the capital of Shandong province) and Zhengzhou (the capital of Henan province) lie on the plain as well, along the banks of the Yellow River. Additionally, the capitals of several Imperial Chinese dynasties were located on the plain, including Luoyang (which at various points was the capital of the Han, Jin, Sui, and Tang dynasties) and Kaifeng (the capital of the Northern Song dynasty). The multipurpose Xiaolangdi Dam marks the location of the Yellow River's last valley before its waters flow onto the North China Plain, a great delta created from silt deposited at the Yellow River's mouth over millennia.

The North China Plain encompasses much of Henan, Hebei, and Shandong provinces, as well as the northern portions of Jiangsu and Anhui. Further south, the North China Plain merges with the similarly flat Yangtze Delta. The North China Plain is fertile, and it is one of the most densely populated regions in the world. The plain is one of China's most important agricultural regions, producing wheat, maize, sorghum, millet, peanuts, sesame seed, cotton, and various vegetables; it is the main area of sorghum, millet, maize, and cotton production in China. In the eastern part of the plain, Shandong's Shengli Oil Field serves as an important petroleum base. Due to its yellow soil, the North China Plain is nicknamed the "Land of the yellow earth". The plain covers an area of about 409,500 square kilometers (158,100 sq mi), most of which lies less than 50 metres (160 ft) above sea level.

Historical significance

The geography of the North China Plain has had profound cultural and political implications. Unlike areas to the south of the Yangtze, the plain generally runs uninterrupted by mountains and has far fewer rivers. As a result, communication by horse is rapid within the plain, and the spoken language of the plain is relatively uniform, in contrast to the plethora of languages and dialects in southern China. In addition, the possibility of rapid communication has meant that the political center of China has tended to be located here. Because the fertile soil of the North China Plain gradually merges with the steppes and deserts of Dzungaria, Inner Mongolia, and Northeast China, the plain has been prone to invasion from nomadic or semi-nomadic tribes originating from those regions, prompting the construction of the Great Wall of China.
Although the soil of the North China Plain is fertile, the weather is unpredictable, as the plain lies at the intersection of humid winds from the Pacific and dry winds from the interior of the Asian continent. This makes the plain prone to both floods and drought. Moreover, the flatness of the plain promotes massive flooding when river works are damaged. Many historians have proposed that these factors encouraged the development of a centralized Chinese state to manage granaries, maintain hydraulic works, and administer fortifications against the steppe peoples. (The "hydraulic society" school holds that early states developed in the valleys of the Nile, Euphrates, Indus and Yellow Rivers due to the need to supervise large numbers of laborers building irrigation canals and controlling floods.)

Philosophically, the North China Plain was also the birthplace of Confucius, the traditional patriarch of East Asian philosophy, who lived and taught in the State of Lu from 551 to 479 BCE. His teachings, recorded in The Analects, eventually became the school of thought known as Confucianism. Tied to the Classical Chinese writing system, Confucianism swept throughout China and on to Korea, Japan, and Vietnam, heavily influencing their respective political, legal, and educational bureaucracies.

The initial project of the Great Leap Forward was to accelerate the construction of waterworks on the North China Plain during the winter of 1957–1958.

Climate change

As climate change increases the Earth's average temperature, and has a disproportionate effect on extreme temperatures, it will also increase the heat stress felt in areas that are already hot or highly humid. The North China Plain is expected to be highly affected, as the region's extensive irrigation networks result in unusually moist air. In scenarios without aggressive action to stop climate change, the worst heatwaves are projected to become severe enough to cause mass mortality among agricultural labourers working outdoors. Under the most extreme climate change scenario, the warming reached by 2100 would be sufficient to cause such heatwaves across the North China Plain approximately once per decade.
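Heat-stress projections of this kind are usually framed in terms of wet-bulb temperature, which folds air temperature and humidity into a single survivability-relevant number: humid air sharply limits the body's ability to cool by sweating. As an illustration that is not part of the source article, the sketch below uses Stull's (2011) empirical approximation of wet-bulb temperature at sea-level pressure; the temperature and humidity pairs are made-up example values, and the roughly 35 °C survivability threshold mentioned in the comments is a commonly cited figure rather than a claim from this text.

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%) using Stull's (2011) empirical fit, valid near
    sea-level pressure for roughly 5-99% RH and -20 to 50 deg C."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# Illustrative values only: the same air temperature becomes far more
# dangerous as humidity rises. Sustained wet-bulb temperatures around
# 35 deg C are commonly cited as the human survivability limit.
for t, rh in [(38.0, 30.0), (38.0, 70.0), (42.0, 70.0)]:
    print(f"T={t:.0f} degC, RH={rh:.0f}% -> Tw={wet_bulb_stull(t, rh):.1f} degC")
```

With these example numbers, raising relative humidity from 30% to 70% at a 38 °C air temperature pushes the wet-bulb temperature from roughly 24 °C to about 33 °C, which is why irrigated, humid plains are disproportionately at risk even without any change in air temperature.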
========================================
[SOURCE: https://en.wikipedia.org/wiki/United_States#cite_note-350] | [TOKENS: 17273]
United States

The United States of America (USA), also known as the United States (U.S.) or America, is a country primarily located in North America. It is a federal republic of 50 states and a federal capital district, Washington, D.C. The 48 contiguous states border Canada to the north and Mexico to the south, with the semi-exclave of Alaska in the northwest and the archipelago of Hawaii in the Pacific Ocean. The United States also asserts sovereignty over five major island territories and various uninhabited islands in Oceania and the Caribbean.[j] It is a megadiverse country, with the world's third-largest land area[c] and third-largest population, exceeding 341 million.[k]

Paleo-Indians first migrated from North Asia to North America at least 15,000 years ago, and formed various civilizations. Spanish colonization established Spanish Florida in 1513, the first European colony in what is now the continental United States. British colonization followed with the 1607 settlement of Virginia, the first of the Thirteen Colonies. Enslavement of Africans was practiced in all the colonies by 1770 and supplied most of the labor for the Southern Colonies' plantation economy. Clashes with the British Crown began as a civil protest over the illegality of taxation without representation in Parliament and the denial of other English rights, and evolved into the American Revolution, which led to the Declaration of Independence and a society based on universal rights. Victory in the 1775–1783 Revolutionary War brought international recognition of U.S. sovereignty and fueled westward expansion, further dispossessing native inhabitants. As more states were admitted, a North–South division over slavery led the Confederate States of America to declare secession and fight the Union in the 1861–1865 American Civil War. With the United States' victory and reunification, slavery was abolished nationally. By the late 19th century, the U.S. economy outpaced the French, German, and British economies combined, and by 1900 the country had established itself as a great power, a status solidified by its involvement in World War I. Following Japan's attack on Pearl Harbor in 1941, the U.S. entered World War II. The war's aftermath left the U.S. and the Soviet Union as rival superpowers, competing for ideological dominance and international influence during the Cold War. The Soviet Union's collapse in 1991 ended the Cold War, leaving the U.S. as the world's sole superpower.

The U.S. federal government is a representative democracy with a president and a constitution that grants separation of powers under three branches: legislative, executive, and judicial. The United States Congress is a bicameral national legislature composed of the House of Representatives (a lower house based on population) and the Senate (an upper house based on equal representation for each state). Federalism grants substantial autonomy to the 50 states. In addition, 574 Native American tribes have sovereignty rights, and there are 326 Native American reservations. Since the 1850s, the Democratic and Republican parties have dominated American politics. American ideals and values are based on a democratic tradition inspired by the American Enlightenment movement.

A developed country, the U.S. ranks high in economic competitiveness, innovation, and higher education. Accounting for over a quarter of nominal global GDP, its economy has been the world's largest since about 1890.
It is the wealthiest country, with the highest disposable household income per capita among OECD members, though its wealth inequality is highly pronounced. Shaped by centuries of immigration, the culture of the U.S. is diverse and globally influential. Accounting for more than a third of global military spending, the country has some of the strongest armed forces in the world and is a designated nuclear-weapon state. A member of numerous international organizations, the U.S. plays a major role in global political, cultural, economic, and military affairs.

Etymology

Documented use of the phrase "United States of America" dates back to January 2, 1776. On that day, Stephen Moylan, a Continental Army aide to General George Washington, wrote a letter to Joseph Reed, Washington's aide-de-camp, seeking to go "with full and ample powers from the United States of America to Spain" to seek assistance in the Revolutionary War effort. The first known public usage is an anonymous essay published in the Williamsburg newspaper The Virginia Gazette on April 6, 1776. Sometime on or after June 11, 1776, Thomas Jefferson wrote "United States of America" in a rough draft of the Declaration of Independence, which was adopted by the Second Continental Congress on July 4, 1776.

The term "United States" and its initialism "U.S.", used as nouns or as adjectives in English, are common short names for the country; the initialism "USA", a noun, is also common. "United States" and "U.S." are the established terms throughout the U.S. federal government, with prescribed rules.[l] "The States" is an established colloquial shortening of the name, used particularly from abroad; "stateside" is the corresponding adjective or adverb.

"America" is the feminine form of the first word of Americus Vesputius, the Latinized name of Italian explorer Amerigo Vespucci (1454–1512);[m] it was first used as a place name by the German cartographers Martin Waldseemüller and Matthias Ringmann in 1507.[n] Vespucci first proposed that the West Indies discovered by Christopher Columbus in 1492 were part of a previously unknown landmass and not among the Indies at the eastern limit of Asia. In English, the term "America" usually does not refer to topics unrelated to the United States, despite the usage of "the Americas" to describe the totality of the continents of North and South America.

History

The first inhabitants of North America migrated from Siberia approximately 15,000 years ago, either across the Bering land bridge or along the now-submerged Ice Age coastline. Small, isolated groups of hunter-gatherers are thought to have migrated alongside herds of large herbivores far into Alaska, with ice-free corridors developing along the Pacific coast and valleys of North America in c. 16,500 – c. 13,500 BCE (c. 18,500 – c. 15,500 BP). The Clovis culture, which appeared around 11,000 BCE, is believed to be the first widespread culture in the Americas. Over time, Indigenous North American cultures grew increasingly sophisticated, and some, such as the Mississippian culture, developed agriculture, architecture, and complex societies. In the post-archaic period, the Mississippian cultures were located in the midwestern, eastern, and southern regions, the Algonquian in the Great Lakes region and along the Eastern Seaboard, and the Hohokam culture and Ancestral Puebloans in the Southwest. Native population estimates of what is now the United States before the arrival of European colonizers range from around 500,000 to nearly 10 million.
Christopher Columbus began exploring the Caribbean for Spain in 1492, leading to Spanish-speaking settlements and missions from what are now Puerto Rico and Florida to New Mexico and California. The first Spanish colony in the present-day continental United States was Spanish Florida, chartered in 1513. After several settlements there failed due to starvation and disease, Spain's first permanent town, Saint Augustine, was founded in 1565. France established its own settlements in French Florida in 1562, but they were either abandoned (Charlesfort, 1563) or destroyed by Spanish raids (Fort Caroline, 1565). Permanent French settlements were founded much later along the Great Lakes (Fort Detroit, 1701), the Mississippi River (Saint Louis, 1764), and especially the Gulf of Mexico (New Orleans, 1718). Early European colonies also included the thriving Dutch colony of New Netherland (settled 1626, present-day New York) and the small Swedish colony of New Sweden (settled 1638 in what became Delaware).

British colonization of the East Coast began with the Virginia Colony (1607) and the Plymouth Colony (Massachusetts, 1620). The Mayflower Compact in Massachusetts and the Fundamental Orders of Connecticut established precedents for local representative self-governance and constitutionalism that would develop throughout the American colonies. While European settlers in what is now the United States experienced conflicts with Native Americans, they also engaged in trade, exchanging European tools for food and animal pelts.[o] Relations ranged from close cooperation to warfare and massacres. The colonial authorities often pursued policies that forced Native Americans to adopt European lifestyles, including conversion to Christianity. Along the eastern seaboard, settlers trafficked Africans through the Atlantic slave trade, largely to provide manual labor on plantations.

The original Thirteen Colonies[p] that would later found the United States were administered as possessions of the British Empire by Crown-appointed governors, though local governments held elections open to most white male property owners. The colonial population grew rapidly from Maine to Georgia, eclipsing Native American populations; by the 1770s, the natural increase of the population was such that only a small minority of Americans had been born overseas. The colonies' distance from Britain facilitated the entrenchment of self-governance, and the First Great Awakening, a series of Christian revivals, fueled colonial interest in guaranteed religious liberty.

Following its victory in the French and Indian War, Britain began to assert greater control over local affairs in the Thirteen Colonies, prompting growing political resistance. One of the primary grievances of the colonists was the denial of their rights as Englishmen, particularly the right to representation in the British government that taxed them. To demonstrate their dissatisfaction and resolve, the First Continental Congress met in 1774 and passed the Continental Association, a colonial boycott of British goods enforced by local "committees of safety" that proved effective. The subsequent British attempt to disarm the colonists resulted in the 1775 Battles of Lexington and Concord, igniting the American Revolutionary War. At the Second Continental Congress, the colonies appointed George Washington commander-in-chief of the Continental Army and created a committee that named Thomas Jefferson to draft the Declaration of Independence.
Two days after the Second Continental Congress passed the Lee Resolution to create an independent, sovereign nation, the Declaration was adopted, on July 4, 1776. The political values of the American Revolution evolved from an armed rebellion demanding reform within an empire into a revolution that created a new social and governing system founded on the defense of liberty and the protection of inalienable natural rights; the sovereignty of the people; republicanism over monarchy, aristocracy, and other hereditary political power; civic virtue; and an intolerance of political corruption. The Founding Fathers of the United States, who included Washington, Jefferson, John Adams, Benjamin Franklin, Alexander Hamilton, John Jay, James Madison, and Thomas Paine, among many others, were inspired by Classical, Renaissance, and Enlightenment philosophies and ideas.

Though in practical effect since their drafting in 1777, the Articles of Confederation were ratified in 1781 and formally established a decentralized government that operated until 1789. After the British surrender at the siege of Yorktown in 1781, American sovereignty was internationally recognized by the Treaty of Paris (1783), through which the U.S. gained territory stretching west to the Mississippi River, north to present-day Canada, and south to Spanish Florida. The Northwest Ordinance (1787) established the precedent by which the country's territory would expand through the admission of new states, rather than the expansion of existing states.

The U.S. Constitution was drafted at the 1787 Constitutional Convention to overcome the limitations of the Articles. It went into effect in 1789, creating a federal republic governed by three separate branches that together formed a system of checks and balances. George Washington was elected the country's first president under the Constitution, and the Bill of Rights was adopted in 1791 to allay skeptics' concerns about the power of the more centralized government. Washington's resignation as commander-in-chief after the Revolutionary War, and his later refusal to run for a third term as the country's first president, established a precedent for the supremacy of civil authority in the United States and for the peaceful transfer of power.

In the late 18th century, American settlers began to expand westward in larger numbers, many with a sense of manifest destiny. The Louisiana Purchase of 1803 from France nearly doubled the territory of the United States. Lingering issues with Britain remained, leading to the War of 1812, which was fought to a draw. Spain ceded Florida and its Gulf Coast territory in 1819. The Missouri Compromise of 1820, which admitted Missouri as a slave state and Maine as a free state, attempted to balance the desire of northern states to prevent the expansion of slavery into new territories with that of southern states to extend it there; primarily, the compromise prohibited slavery in all other lands of the Louisiana Purchase north of the 36°30′ parallel.

As Americans expanded further into territory inhabited by Native Americans, the federal government implemented policies of Indian removal or assimilation. The most significant such legislation was the Indian Removal Act of 1830, a key policy of President Andrew Jackson. It resulted in the Trail of Tears (1830–1850), in which an estimated 60,000 Native Americans living east of the Mississippi River were forcibly removed and displaced to lands far to the west, causing 13,200 to 16,700 deaths along the forced march.
Settler expansion, as well as this influx of Indigenous peoples from the East, resulted in the American Indian Wars west of the Mississippi. During the colonial period, slavery had become legal in all of the Thirteen Colonies, and by 1770 it provided the main labor force in the large-scale, agriculture-dependent economies of the Southern Colonies, from Maryland to Georgia. The practice began to be significantly questioned during the American Revolution, and, spurred by an active abolitionist movement that had reemerged in the 1830s, states in the North enacted laws to prohibit slavery within their boundaries. At the same time, support for slavery had strengthened in Southern states, where the widespread use of inventions such as the cotton gin (1793) had made slavery immensely profitable for Southern elites.

The United States annexed the Republic of Texas in 1845, and the 1846 Oregon Treaty led to U.S. control of the present-day American Northwest. Dispute with Mexico over Texas led to the Mexican–American War (1846–1848). After the victory of the U.S., Mexico recognized U.S. sovereignty over Texas, New Mexico, and California in the 1848 Mexican Cession; the cession's lands also included the future states of Nevada, Colorado, and Utah. The California gold rush of 1848–1849 spurred a huge migration of white settlers to the Pacific coast, leading to even more confrontations with Native populations. One of the most violent, the California genocide of thousands of Native inhabitants, lasted into the mid-1870s. Additional western territories and states were created.

Throughout the 1850s, the sectional conflict over slavery was further inflamed by national legislation in the U.S. Congress and by decisions of the Supreme Court. In Congress, the Fugitive Slave Act of 1850 mandated the forcible return of slaves who had taken refuge in non-slave states to their owners in the South, while the Kansas–Nebraska Act of 1854 effectively gutted the anti-slavery requirements of the Missouri Compromise. In its Dred Scott decision of 1857, the Supreme Court ruled against a slave brought into non-slave territory, simultaneously declaring the entire Missouri Compromise unconstitutional. These and other events exacerbated tensions between North and South that would culminate in the American Civil War (1861–1865).

Beginning with South Carolina, 11 slave-state governments voted to secede from the United States in 1861, joining to create the Confederate States of America; all other state governments remained loyal to the Union.[q] War broke out in April 1861 after the Confederacy bombarded Fort Sumter. Following the Emancipation Proclamation on January 1, 1863, many freed slaves joined the Union army. The war began to turn in the Union's favor following the 1863 Siege of Vicksburg and Battle of Gettysburg, and the Confederates surrendered in 1865 after the Union's victory in the Battle of Appomattox Court House. Efforts toward reconstruction in the secessionist South had begun as early as 1862, but it was only after President Lincoln's assassination that the three Reconstruction Amendments to the Constitution were ratified to protect civil rights. The amendments codified nationally the abolition of slavery and involuntary servitude (except as punishment for crimes), promised equal protection under the law for all persons, and prohibited discrimination on the basis of race or previous enslavement. As a result, African Americans took an active political role in the ex-Confederate states in the decade following the Civil War.
The former Confederate states were readmitted to the Union, beginning with Tennessee in 1866 and ending with Georgia in 1870. National infrastructure, including transcontinental telegraph and railroads, spurred growth in the American frontier. This was accelerated by the Homestead Acts, through which nearly 10 percent of the total land area of the United States was given away free to some 1.6 million homesteaders. From 1865 through 1917, an unprecedented stream of immigrants arrived in the United States, including 24.4 million from Europe. Most came through the Port of New York, as New York City and other large cities on the East Coast became home to large Jewish, Irish, and Italian populations. Many Northern Europeans as well as significant numbers of Germans and other Central Europeans moved to the Midwest. At the same time, about one million French Canadians migrated from Quebec to New England. During the Great Migration, millions of African Americans left the rural South for urban areas in the North. Alaska was purchased from Russia in 1867. The Compromise of 1877 is generally considered the end of the Reconstruction era, as it resolved the electoral crisis following the 1876 presidential election and led President Rutherford B. Hayes to reduce the role of federal troops in the South. Immediately, the Redeemers began evicting the Carpetbaggers and quickly regained local control of Southern politics in the name of white supremacy. African Americans endured a period of heightened, overt racism following Reconstruction, a time often considered the nadir of American race relations. A series of Supreme Court decisions, including Plessy v. Ferguson, emptied the Fourteenth and Fifteenth Amendments of their force, allowing Jim Crow laws in the South to remain unchecked, sundown towns in the Midwest, and segregation in communities across the country, which would be reinforced in part by the policy of redlining later adopted by the federal Home Owners' Loan Corporation. An explosion of technological advancement, accompanied by the exploitation of cheap immigrant labor, led to rapid economic expansion during the Gilded Age of the late 19th century. The expansion continued into the early 20th century, by which time the United States had outpaced the economies of Britain, France, and Germany combined. This growth fostered the amassing of power by a few prominent industrialists, largely by their formation of trusts and monopolies to prevent competition. Tycoons led the nation's expansion in the railroad, petroleum, and steel industries. The United States emerged as a pioneer of the automotive industry. These changes resulted in significant increases in economic inequality, slum conditions, and social unrest, creating the environment for labor unions and socialist movements to begin to flourish. This period eventually ended with the advent of the Progressive Era, which was characterized by significant economic and social reforms. Pro-American elements in Hawaii overthrew the Hawaiian monarchy; the islands were annexed in 1898. That same year, Puerto Rico, the Philippines, and Guam were ceded to the U.S. by Spain after the latter's defeat in the Spanish–American War. (The Philippines was granted full independence from the U.S. on July 4, 1946, following World War II. Puerto Rico and Guam have remained U.S. territories.) American Samoa was acquired by the United States in 1900 after the Second Samoan Civil War. The U.S. Virgin Islands were purchased from Denmark in 1917.
The United States entered World War I alongside the Allies in 1917, helping to turn the tide against the Central Powers. In 1920, a constitutional amendment granted nationwide women's suffrage. During the 1920s and 1930s, radio as a medium for mass communication and the introduction of early television transformed communications nationwide. The Wall Street Crash of 1929 triggered the Great Depression, to which President Franklin D. Roosevelt responded with the New Deal plan of "reform, recovery and relief", a series of unprecedented and sweeping recovery programs and employment relief projects combined with financial reforms and regulations. Initially neutral during World War II, the U.S. began supplying war materiel to the Allies in March 1941 and entered the war in December after Japan's attack on Pearl Harbor. Agreeing to a "Europe first" policy, the U.S. concentrated its wartime efforts first on defeating Japan's European allies; Italy signed an armistice in 1943, and Germany surrendered in May 1945. The U.S. developed the first nuclear weapons and used them against the Japanese cities of Hiroshima and Nagasaki in August 1945, ending the war. The United States was one of the "Four Policemen" who met to plan the post-war world, alongside the United Kingdom, the Soviet Union, and China. The U.S. emerged relatively unscathed from the war, with even greater economic power and international political influence. The end of World War II in 1945 left the U.S. and the Soviet Union as superpowers, each with its own political, military, and economic sphere of influence. Geopolitical tensions between the two superpowers soon led to the Cold War. The U.S. implemented a policy of containment intended to limit the Soviet Union's sphere of influence; engaged in regime change against governments perceived to be aligned with the Soviets; and prevailed in the Space Race, which culminated with the first crewed Moon landing in 1969. Domestically, the U.S. experienced economic growth, urbanization, and population growth following World War II. The civil rights movement emerged, with Martin Luther King Jr. becoming a prominent leader in the early 1960s. The Great Society plan of President Lyndon B. Johnson's administration resulted in groundbreaking and broad-reaching laws, policies and a constitutional amendment to counteract some of the worst effects of lingering institutional racism. The counterculture movement in the U.S. brought significant social changes, including the liberalization of attitudes toward recreational drug use and sexuality. It also encouraged open defiance of the military draft (leading to the end of conscription in 1973) and wide opposition to U.S. intervention in Vietnam, with the U.S. withdrawing completely in 1975. A societal shift in the roles of women was significantly responsible for the large increase in female paid labor participation starting in the 1970s, and by 1985 the majority of American women aged 16 and older were employed. The fall of communism and the dissolution of the Soviet Union from 1989 to 1991 marked the end of the Cold War and left the United States as the world's sole superpower. This cemented the United States' global influence, reinforcing the concept of the "American Century" as the U.S. dominated international political, cultural, economic, and military affairs. The 1990s saw the longest recorded economic expansion in American history, a dramatic decline in U.S. crime rates, and advances in technology.
Throughout this decade, technological innovations such as the World Wide Web, the evolution of the Pentium microprocessor in accordance with Moore's law, rechargeable lithium-ion batteries, the first gene therapy trial, and cloning either emerged in the U.S. or were improved upon there. The Human Genome Project was formally launched in 1990, while Nasdaq became the first stock market in the United States to trade online in 1998. In the Gulf War of 1991, an American-led international coalition of states expelled an Iraqi invasion force that had occupied neighboring Kuwait. The September 11 attacks on the United States in 2001 by the pan-Islamist militant organization al-Qaeda led to the war on terror and subsequent military interventions in Afghanistan and in Iraq. The collapse of the U.S. housing bubble in 2007 triggered the Great Recession, the largest economic contraction since the Great Depression. In the 2010s and early 2020s, the United States has experienced increased political polarization and democratic backsliding. The country's polarization was violently reflected in the January 2021 Capitol attack, when a mob of insurrectionists entered the U.S. Capitol and sought to prevent the peaceful transfer of power in an attempted self-coup d'état. Geography The United States is the world's third-largest country by total area behind Russia and Canada. The 48 contiguous states and the District of Columbia have a combined area of 3,119,885 square miles (8,080,470 km2). In 2021, the United States had 8% of the Earth's permanent meadows and pastures and 10% of its cropland. Starting in the east, the coastal plain of the Atlantic seaboard gives way to inland forests and rolling hills in the Piedmont plateau region. The Appalachian Mountains and the Adirondack Massif separate the East Coast from the Great Lakes and the grasslands of the Midwest. The Mississippi River System, the world's fourth-longest river system, runs predominantly north–south through the center of the country. The flat and fertile prairie of the Great Plains stretches to the west, interrupted by a highland region in the southeast. The Rocky Mountains, west of the Great Plains, extend north to south across the country, peaking at over 14,000 feet (4,300 m) in Colorado. The supervolcano underlying Yellowstone National Park in the Rocky Mountains, the Yellowstone Caldera, is the continent's largest volcanic feature. Farther west are the rocky Great Basin and the Chihuahuan, Sonoran, and Mojave deserts. In the northwest corner of Arizona, carved by the Colorado River, is the Grand Canyon, a steep-sided canyon and popular tourist destination known for its overwhelming visual size and intricate, colorful landscape. The Cascade and Sierra Nevada mountain ranges run close to the Pacific coast. The lowest and highest points in the contiguous United States are in the State of California, about 84 miles (135 km) apart. At an elevation of 20,310 feet (6,190.5 m), Alaska's Denali (also called Mount McKinley) is the highest peak in the country and on the continent. Active volcanoes are common throughout Alaska's Alexander and Aleutian Islands. Located entirely outside North America, the archipelago of Hawaii consists of volcanic islands, physiographically and ethnologically part of the Polynesian subregion of Oceania. In addition to its total land area, the United States has one of the world's largest marine exclusive economic zones, spanning approximately 4.5 million square miles (11.7 million km2) of ocean.
With its large size and geographic variety, the United States includes most climate types. East of the 100th meridian, the climate ranges from humid continental in the north to humid subtropical in the south. The western Great Plains are semi-arid. Many mountainous areas of the American West have an alpine climate. The climate is arid in the Southwest, Mediterranean in coastal California, and oceanic in coastal Oregon, Washington, and southern Alaska. Most of Alaska is subarctic or polar. Hawaii, the southern tip of Florida, and U.S. territories in the Caribbean and Pacific are tropical. The United States receives more high-impact extreme weather incidents than any other country. States bordering the Gulf of Mexico are prone to hurricanes, and most of the world's tornadoes occur in the country, mainly in Tornado Alley. Due to climate change in the country, extreme weather has become more frequent in the U.S. in the 21st century, with three times the number of reported heat waves compared to the 1960s. Since the 1990s, droughts in the American Southwest have become more persistent and more severe. The regions considered most attractive to the population are also among the most vulnerable to such extreme weather. The U.S. is one of 17 megadiverse countries containing large numbers of endemic species: about 17,000 species of vascular plants occur in the contiguous United States and Alaska, and over 1,800 species of flowering plants are found in Hawaii, few of which occur on the mainland. The United States is home to 428 mammal species, 784 birds, 311 reptiles, 295 amphibians, and around 91,000 insect species. There are 63 national parks, and hundreds of other federally managed monuments, forests, and wilderness areas, administered by the National Park Service and other agencies. About 28% of the country's land is publicly owned and federally managed, primarily in the Western States. Most of this land is protected, though some is leased for commercial use, and less than one percent is used for military purposes. Environmental issues in the United States include debates on non-renewable resources and nuclear energy, air and water pollution, biodiversity, logging and deforestation, and climate change. The U.S. Environmental Protection Agency (EPA) is the federal agency charged with addressing most environment-related issues. The idea of wilderness has shaped the management of public lands since the passage of the Wilderness Act in 1964. The Endangered Species Act of 1973 provides a way to protect threatened and endangered species and their habitats. The United States Fish and Wildlife Service implements and enforces the Act. In 2024, the U.S. ranked 35th among 180 countries in the Environmental Performance Index. Government and politics The United States is a federal republic of 50 states and a federal capital district, Washington, D.C. The U.S. asserts sovereignty over five unincorporated territories and several uninhabited island possessions. It is the world's oldest surviving federation, and its presidential system of federal government has been adopted, in whole or in part, by many newly independent states worldwide following their decolonization. The Constitution of the United States serves as the country's supreme legal document. Most scholars describe the United States as a liberal democracy. Composed of three branches, all headquartered in Washington, D.C., the federal government is the national government of the United States. The U.S.
Constitution establishes a separation of powers intended to provide a system of checks and balances to prevent any of the three branches from becoming supreme. The three-branch system is known as the presidential system, in contrast to the parliamentary system, where the executive is part of the legislative body. Many countries around the world adopted this aspect of the 1789 Constitution of the United States, especially in the postcolonial Americas. In the U.S. federal system, sovereign powers are shared between three levels of government specified in the Constitution: the federal government, the states, and Indian tribes. The U.S. also asserts sovereignty over five permanently inhabited territories: American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. Residents of the 50 states are governed by their elected state government, under state constitutions compatible with the national constitution, and by elected local governments that are administrative divisions of a state. States are subdivided into counties or county equivalents, and (except for Hawaii) further divided into municipalities, each administered by elected representatives. The District of Columbia is a federal district containing the U.S. capital, Washington, D.C. The federal district is an administrative division of the federal government. Indian country is made up of 574 federally recognized tribes and 326 Indian reservations. They hold a government-to-government relationship with the U.S. federal government in Washington and are legally defined as domestic dependent nations with inherent tribal sovereignty rights. In addition to the five major territories, the U.S. also asserts sovereignty over the United States Minor Outlying Islands in the Pacific Ocean and the Caribbean. The seven undisputed islands without permanent populations are Baker Island, Howland Island, Jarvis Island, Johnston Atoll, Kingman Reef, Midway Atoll, and Palmyra Atoll. U.S. sovereignty over the unpopulated Bajo Nuevo Bank, Navassa Island, Serranilla Bank, and Wake Island is disputed. The Constitution is silent on political parties. However, parties developed independently of it in the late 18th century, beginning with the Federalist and Anti-Federalist factions. Since then, the United States has operated as a de facto two-party system, though the parties have changed over time. Since the mid-19th century, the two main national parties have been the Democratic Party and the Republican Party. The former is perceived as relatively liberal in its political platform, and the latter as relatively conservative. The United States has an established structure of foreign relations, with the world's second-largest diplomatic corps as of 2024. It is a permanent member of the United Nations Security Council and home to the United Nations headquarters. The United States is a member of the G7, G20, and OECD intergovernmental organizations. Almost all countries have embassies and many have consulates (official representatives) in the country. Likewise, nearly all countries maintain formal diplomatic relations with the United States, except Iran, North Korea, and Bhutan. Though Taiwan does not have formal diplomatic relations with the U.S., it maintains close unofficial relations. The United States regularly supplies Taiwan with military equipment to deter potential Chinese aggression.
Its geopolitical attention also turned to the Indo-Pacific when the United States joined the Quadrilateral Security Dialogue with Australia, India, and Japan. The United States has a "Special Relationship" with the United Kingdom and strong ties with Canada, Australia, New Zealand, the Philippines, Japan, South Korea, Israel, and several European Union countries such as France, Italy, Germany, Spain, and Poland. The U.S. works closely with its NATO allies on military and national security issues, and with countries in the Americas through the Organization of American States and the United States–Mexico–Canada Agreement (USMCA). The U.S. exercises full international defense authority and responsibility for Micronesia, the Marshall Islands, and Palau through the Compact of Free Association. It has increasingly conducted strategic cooperation with India, while its ties with China have steadily deteriorated. Beginning in 2014, the U.S. became a key ally of Ukraine. After Donald Trump was elected U.S. president in 2024, he sought to negotiate an end to the Russo-Ukrainian War. He paused all military aid to Ukraine in March 2025, although the aid resumed later. Trump also ended U.S. intelligence sharing with the country, but this too was eventually restored. The president is the commander-in-chief of the United States Armed Forces and appoints its leaders, the secretary of defense and the Joint Chiefs of Staff. The Department of Defense, headquartered at the Pentagon near Washington, D.C., administers five of the six service branches, which are made up of the U.S. Army, Marine Corps, Navy, Air Force, and Space Force. The Coast Guard is administered by the Department of Homeland Security in peacetime and can be transferred to the Department of the Navy in wartime. The total strength of the military is about 1.3 million active-duty personnel, with an additional 400,000 in reserve. The United States spent $997 billion on its military in 2024, which is by far the largest amount of any country, making up 37% of global military spending and accounting for 3.4% of the country's GDP. The U.S. possesses 42% of the world's nuclear weapons—the second-largest stockpile after that of Russia. The U.S. military is widely regarded as the most powerful and advanced in the world. The United States has the third-largest combined armed forces in the world, behind the Chinese People's Liberation Army and Indian Armed Forces. The U.S. military operates about 800 bases and facilities abroad, and maintains deployments of more than 100 active-duty personnel in 25 foreign countries. The United States has engaged in over 400 military interventions since its founding in 1776, with over half of these occurring between 1950 and 2019 and 25% occurring in the post-Cold War era. State defense forces (SDFs) are military units that operate under the sole authority of a state government. SDFs are authorized by state and federal law but are under the command of the state's governor. By contrast, the 54 U.S. National Guard organizations fall under the dual control of state or territorial governments and the federal government; their units can also become federalized entities, but SDFs cannot be federalized. The National Guard personnel of a state or territory can be federalized by the president under the National Defense Act Amendments of 1933; this legislation created the Guard and provides for the integration of Army National Guard and Air National Guard units and personnel into the U.S. Army and (since 1947) the U.S. Air Force.
The total number of National Guard members is about 430,000, while the estimated combined strength of SDFs is less than 10,000. There are about 18,000 police agencies in the United States, ranging from the local to the national level. Law in the United States is mainly enforced by local police departments and sheriff departments in their municipal or county jurisdictions. The state police departments have authority in their respective state, and federal agencies such as the Federal Bureau of Investigation (FBI) and the U.S. Marshals Service have national jurisdiction and specialized duties, such as protecting civil rights and national security, enforcing the rulings of U.S. federal courts and federal laws, and investigating interstate criminal activity. State courts conduct almost all civil and criminal trials, while federal courts adjudicate the much smaller number of civil and criminal cases that relate to federal law. There is no unified "criminal justice system" in the United States. The American prison system is largely heterogeneous, with thousands of relatively independent systems operating across federal, state, local, and tribal levels. In 2025, "these systems hold nearly 2 million people in 1,566 state prisons, 98 federal prisons, 3,116 local jails, 1,277 juvenile correctional facilities, 133 immigration detention facilities, and 80 Indian country jails, as well as in military prisons, civil commitment centers, state psychiatric hospitals, and prisons in the U.S. territories." Despite disparate systems of confinement, four main institutions dominate: federal prisons, state prisons, local jails, and juvenile correctional facilities. Federal prisons are run by the Federal Bureau of Prisons and hold pretrial detainees as well as people who have been convicted of federal crimes. State prisons, run by the department of corrections of each state, hold people sentenced and serving prison time (usually longer than one year) for felony offenses. Local jails are county or municipal facilities that incarcerate defendants prior to trial; they also hold those serving short sentences (typically under a year). Juvenile correctional facilities are operated by local or state governments and serve as longer-term placements for any minor adjudicated as delinquent and ordered by a judge to be confined. In January 2023, the United States had the sixth-highest per capita incarceration rate in the world—531 people per 100,000 inhabitants—and the largest prison and jail population in the world, with more than 1.9 million people incarcerated. An analysis of the World Health Organization Mortality Database from 2010 showed U.S. homicide rates "were 7 times higher than in other high-income countries, driven by a gun homicide rate that was 25 times higher". Economy The U.S. has a highly developed mixed economy that has been the world's largest nominally since about 1890. Its 2024 gross domestic product (GDP) of more than $29 trillion constituted over 25% of nominal global economic output, or 15% at purchasing power parity (PPP). From 1983 to 2008, U.S. real compounded annual GDP growth was 3.3%, compared to a 2.3% weighted average for the rest of the G7. The country ranks first in the world by nominal GDP, second when adjusted for PPP, and ninth by PPP-adjusted GDP per capita. In February 2024, the total U.S. federal government debt was $34.4 trillion. Of the world's 500 largest companies by revenue, 138 were headquartered in the U.S. in 2025, the highest number of any country. The U.S.
dollar is the currency most used in international transactions and the world's foremost reserve currency, backed by the country's dominant economy, its military, the petrodollar system, its large U.S. Treasuries market, and the linked eurodollar market. Several countries use it as their official currency, and in others it is the de facto currency. The U.S. has free trade agreements with several countries, including the USMCA with Canada and Mexico. Although the United States has reached a post-industrial level of economic development and is often described as having a service economy, it remains a major industrial power; in 2024, the U.S. manufacturing sector was the world's second-largest by value output after China's. New York City is the world's principal financial center, and its metropolitan area is the world's largest metropolitan economy. The New York Stock Exchange and Nasdaq, both located in New York City, are the world's two largest stock exchanges by market capitalization and trade volume. The United States is at the forefront of technological advancement and innovation in many economic fields, especially in artificial intelligence; electronics and computers; pharmaceuticals; and medical, aerospace and military equipment. The country's economy is fueled by abundant natural resources, a well-developed infrastructure, and high productivity. The largest trading partners of the United States are the European Union, Mexico, Canada, China, Japan, South Korea, the United Kingdom, Vietnam, India, and Taiwan. The United States is the world's largest importer and second-largest exporter. It is by far the world's largest exporter of services. Americans have the highest average household and employee income among OECD member states, and the fourth-highest median household income in 2023, up from sixth-highest in 2013. With personal consumption expenditures of over $18.5 trillion in 2023, the U.S. has a heavily consumer-driven economy and is the world's largest consumer market. The U.S. ranked first in the number of dollar billionaires and millionaires in 2023, with 735 billionaires and nearly 22 million millionaires. Wealth in the United States is highly concentrated; in 2011, the richest 10% of the adult population owned 72% of the country's household wealth, while the bottom 50% owned just 2%. U.S. wealth inequality has increased substantially since the late 1980s, and income inequality in the U.S. reached a record high in 2019. In 2024, the country had some of the highest wealth and income inequality levels among OECD countries. Since the 1970s, there has been a decoupling of U.S. wage gains from worker productivity. In 2016, the top fifth of earners took home more than half of all income, giving the U.S. one of the widest income distributions among OECD countries. There were about 771,480 homeless persons in the U.S. in 2024. In 2022, 6.4 million children experienced food insecurity. Feeding America estimates that around one in five, or approximately 13 million, children experience hunger in the U.S. and do not know where or when they will get their next meal. Also in 2022, about 37.9 million people, or 11.5% of the U.S. population, were living in poverty. The United States has a smaller welfare state and redistributes less income through government action than most other high-income countries. It is the only advanced economy that does not guarantee its workers paid vacation nationally and one of a few countries in the world without federal paid family leave as a legal right.
The United States has a higher percentage of low-income workers than almost any other developed country, largely because of a weak collective bargaining system and lack of government support for at-risk workers. The United States has been a leader in technological innovation since the late 19th century and scientific research since the mid-20th century. Methods for producing interchangeable parts and the establishment of a machine tool industry enabled the large-scale manufacturing of U.S. consumer products in the late 19th century. By the early 20th century, factory electrification, the introduction of the assembly line, and other labor-saving techniques created the system of mass production. In the 21st century, the United States continues to be one of the world's foremost scientific powers, though China has emerged as a major competitor in many fields. The U.S. has the highest research and development expenditures of any country and ranks ninth as a percentage of GDP. In 2022, the United States had the second-highest number of published scientific papers, after China. In 2021, the U.S. ranked second (also after China) by the number of patent applications, and third by trademark and industrial design applications (after China and Germany), according to World Intellectual Property Indicators. In 2025, the United States ranked third (after Switzerland and Sweden) in the Global Innovation Index. The United States is considered to be a world leader in the development of artificial intelligence technology. In 2023, the United States was ranked the second most technologically advanced country in the world (after South Korea) by Global Finance magazine. The United States has maintained a space program since the late 1950s, beginning with the establishment of the National Aeronautics and Space Administration (NASA) in 1958. NASA's Apollo program (1961–1972) achieved the first crewed Moon landing with the 1969 Apollo 11 mission; it remains one of the agency's most significant milestones. Other major endeavors by NASA include the Space Shuttle program (1981–2011), the Voyager program (1972–present), the Hubble and James Webb space telescopes (launched in 1990 and 2021, respectively), and the multi-mission Mars Exploration Program (Spirit and Opportunity, Curiosity, and Perseverance). NASA is one of five agencies collaborating on the International Space Station (ISS); U.S. contributions to the ISS include several modules, including Destiny (2001), Harmony (2007), and Tranquility (2010), as well as ongoing logistical and operational support. The U.S. private sector dominates the global commercial spaceflight industry. Prominent American spaceflight contractors include Blue Origin, Boeing, Lockheed Martin, Northrop Grumman, and SpaceX. NASA programs such as the Commercial Crew Program, Commercial Resupply Services, Commercial Lunar Payload Services, and NextSTEP have facilitated growing private-sector involvement in American spaceflight. In 2023, the United States received approximately 84% of its energy from fossil fuels, and its largest source of energy was petroleum (38%), followed by natural gas (36%), renewable sources (9%), coal (9%), and nuclear power (9%). In 2022, the United States constituted about 4% of the world's population, but consumed around 16% of the world's energy. The U.S. ranks as the second-highest emitter of greenhouse gases behind China. The U.S. is the world's largest producer of nuclear power, generating around 30% of the world's nuclear electricity.
It also has the highest number of nuclear power reactors of any country. As of 2024, the U.S. plans to triple its nuclear power capacity by 2050. The United States' 4 million miles (6.4 million kilometers) of road network, owned almost entirely by state and local governments, is the longest in the world. The extensive Interstate Highway System that connects all major U.S. cities is funded mostly by the federal government but maintained by state departments of transportation. The system is further extended by state highways and some private toll roads. In 2022, the U.S. was among the top ten countries in vehicle ownership per capita, with 850 vehicles per 1,000 people. A 2022 study found that 76% of U.S. commuters drive alone and 14% ride a bicycle, including bike owners and users of bike-sharing networks. About 11% use some form of public transportation. Public transportation in the United States is well developed in the largest urban areas, notably New York City, Washington, D.C., Boston, Philadelphia, Chicago, and San Francisco; otherwise, coverage is generally less extensive than in most other developed countries. The U.S. also has many relatively car-dependent localities. Long-distance intercity travel is provided primarily by airlines, but travel by rail is more common along the Northeast Corridor, home to the only high-speed rail in the U.S. that meets international standards. Amtrak, the country's government-sponsored national passenger rail company, has a relatively sparse network compared to that of Western European countries. Service is concentrated in the Northeast, California, the Midwest, the Pacific Northwest, and Virginia/Southeast. The United States has an extensive air transportation network. U.S. civilian airlines are all privately owned. The three largest airlines in the world, by total number of passengers carried, are U.S.-based; American Airlines became the global leader after its 2013 merger with US Airways. Of the 50 busiest airports in the world, 16 are in the United States, as well as five of the top 10. The world's busiest airport by passenger volume is Hartsfield–Jackson Atlanta International in Atlanta, Georgia. In 2022, most of the 19,969 U.S. airports were owned and operated by local government authorities, and there were also some private airports. Some 5,193 are designated as "public use", including for general aviation. The Transportation Security Administration (TSA) has provided security at most major airports since 2001. The country's rail transport network, the longest in the world at 182,412.3 mi (293,564.2 km), handles mostly freight (in contrast to more passenger-centered rail in Europe). Because they are often privately owned, U.S. railroads lag behind those of the rest of the world in terms of electrification. The country's inland waterways are the world's fifth-longest, totaling 25,482 mi (41,009 km). They are used extensively for freight, recreation, and a small amount of passenger traffic. Of the world's 50 busiest container ports, four are located in the United States, with the busiest in the country being the Port of Los Angeles. Demographics The U.S. Census Bureau reported 331,449,281 residents on April 1, 2020, making the United States the third-most-populous country in the world, after India and China. The Census Bureau's official 2025 population estimate was 341,784,857, an increase of 3.1% since the 2020 census. According to the Bureau's U.S. Population Clock, on July 1, 2024, the U.S.
population had a net gain of one person every 16 seconds, or about 5,400 people per day. In 2023, 51% of Americans age 15 and over were married, 6% were widowed, 10% were divorced, and 34% had never been married. In 2023, the total fertility rate for the U.S. stood at 1.6 children per woman, and, at 23%, the country had the world's highest rate of children living in single-parent households in 2019. Most Americans live in the suburbs of major metropolitan areas. The United States has a diverse population; 37 ancestry groups have more than one million members. White Americans with ancestry from Europe, the Middle East, or North Africa form the largest racial and ethnic group at 57.8% of the United States population. Hispanic and Latino Americans form the second-largest group and are 18.7% of the United States population. African Americans constitute the country's third-largest ancestry group and are 12.1% of the total U.S. population. Asian Americans are the country's fourth-largest group, composing 5.9% of the United States population. The country's 3.7 million Native Americans account for about 1%, and some 574 native tribes are recognized by the federal government. In 2024, the median age of the United States population was 39.1 years. While many languages and dialects are spoken in the United States, English is by far the most commonly spoken and written. English has long been the country's de facto official language, and in 2025 Executive Order 14224 declared it the official language. However, the U.S. has never had a de jure official language, as Congress has never passed a law designating English as official for all three federal branches. Some laws, such as U.S. naturalization requirements, nonetheless standardize English. Twenty-eight states and the United States Virgin Islands have laws that designate English as the sole official language; 19 states and the District of Columbia have no official language. Three states and four U.S. territories have recognized local or indigenous languages in addition to English: Hawaii (Hawaiian), Alaska (twenty Native languages), South Dakota (Sioux), American Samoa (Samoan), Puerto Rico (Spanish), Guam (Chamorro), and the Northern Mariana Islands (Carolinian and Chamorro). In total, 169 Native American languages are spoken in the United States. In Puerto Rico, Spanish is more widely spoken than English. According to the American Community Survey (2020), some 245.4 million people in the U.S. age five and older spoke only English at home. About 41.2 million spoke Spanish at home, making it the second most commonly used language. Other languages spoken at home by one million people or more include Chinese (3.4 million), Tagalog (1.71 million), Vietnamese (1.52 million), Arabic (1.39 million), French (1.18 million), Korean (1.07 million), and Russian (1.04 million). German, spoken by 1 million people at home in 2010, fell to 857,000 total speakers in 2020. America's immigrant population is by far the world's largest in absolute terms. In 2022, there were 87.7 million immigrants and U.S.-born children of immigrants in the United States, accounting for nearly 27% of the overall U.S. population. In 2017, out of the U.S. foreign-born population, some 45% (20.7 million) were naturalized citizens, 27% (12.3 million) were lawful permanent residents, 6% (2.2 million) were temporary lawful residents, and 23% (10.5 million) were unauthorized immigrants.
In 2019, the top countries of origin for immigrants were Mexico (24% of immigrants), India (6%), China (5%), the Philippines (4.5%), and El Salvador (3%). In fiscal year 2022, over one million immigrants (most of whom entered through family reunification) were granted legal residence. The undocumented immigrant population in the U.S. reached a record high of 14 million in 2023. The First Amendment guarantees the free exercise of religion in the country and forbids Congress from passing laws respecting its establishment. Religious practice is widespread, among the most diverse in the world, and profoundly vibrant. The country has the world's largest Christian population, which includes the fourth-largest population of Catholics. Other notable faiths include Judaism, Buddhism, Hinduism, Islam, New Age, and Native American religions. Religious practice varies significantly by region. "Ceremonial deism" is common in American culture. The overwhelming majority of Americans believe in a higher power or spiritual force, engage in spiritual practices such as prayer, and consider themselves religious or spiritual. In the Southern United States' "Bible Belt", evangelical Protestantism plays a significant role culturally; New England and the Western United States tend to be more secular. Mormonism, a Restorationist movement founded in the U.S. in 1830, is the predominant religion in Utah and a major religion in Idaho. About 82% of Americans live in metropolitan areas, particularly in suburbs; about half of those reside in cities with populations over 50,000. In 2022, 333 incorporated municipalities had populations over 100,000, nine cities had more than one million residents, and four cities—New York City, Los Angeles, Chicago, and Houston—had populations exceeding two million. Many U.S. metropolitan populations are growing rapidly, particularly in the South and West. According to the Centers for Disease Control and Prevention (CDC), average U.S. life expectancy at birth reached 79.0 years in 2024, its highest recorded level. This was an increase of 0.6 years over 2023. The CDC attributed the improvement to a significant fall in the number of fatal drug overdoses in the country, noting that "heart disease continues to be the leading cause of death in the United States, followed by cancer and unintentional injuries." In 2024, life expectancy at birth for American men rose to 76.5 years (+0.7 years compared to 2023), while life expectancy for women was 81.4 years (+0.3 years). Starting in 1998, life expectancy in the U.S. fell behind that of other wealthy industrialized countries, and Americans' "health disadvantage" gap has been increasing ever since. The Commonwealth Fund reported in 2020 that the U.S. had the highest suicide rate among high-income countries. Approximately one-third of the U.S. adult population is obese and another third is overweight. The U.S. healthcare system far outspends that of any other country, measured both in per capita spending and as a percentage of GDP, but attains worse healthcare outcomes when compared to peer countries for reasons that are debated. The United States is the only developed country without a system of universal healthcare, and a significant proportion of its population does not carry health insurance. Government-funded healthcare coverage for the poor (Medicaid) and for those age 65 and older (Medicare) is available to Americans who meet the programs' income or age qualifications.
In 2010, then-President Obama signed the Patient Protection and Affordable Care Act into law. Abortion in the United States is not federally protected, and is illegal or restricted in 17 states. American primary and secondary education, known in the U.S. as K–12 ("kindergarten through 12th grade"), is decentralized. School systems are operated by state, territorial, and sometimes municipal governments and regulated by the U.S. Department of Education. In general, children are required to attend school or an approved homeschool from the age of five or six (kindergarten or first grade) until they are 18 years old. This often brings students through the 12th grade, the final year of a U.S. high school, but some states and territories allow them to leave school earlier, at age 16 or 17. The U.S. spends more on education per student than any other country, an average of $18,614 per year per public elementary and secondary school student in 2020–2021. Among Americans age 25 and older, 92.2% graduated from high school, 62.7% attended some college, 37.7% earned a bachelor's degree, and 14.2% earned a graduate degree. The U.S. literacy rate is near-universal. The U.S. has produced the most Nobel Prize winners of any country, with 411 (having won 413 awards). U.S. tertiary or higher education has earned a global reputation. Many of the world's top universities, as listed by various ranking organizations, are in the United States, including 19 of the top 25. American higher education is dominated by state university systems, although the country's many private universities and colleges enroll about 20% of all American students. Local community colleges generally offer open admissions, lower tuition, and coursework leading to a two-year associate degree or a non-degree certificate. As for public expenditures on higher education, the U.S. spends more per student than the OECD average, and more than all other nations in combined public and private spending. Colleges and universities directly funded by the federal government do not charge tuition and are limited to military personnel and government employees, including the U.S. service academies, the Naval Postgraduate School, and military staff colleges. Despite some student loan forgiveness programs being in place, student loan debt increased by 102% between 2010 and 2020, and exceeded $1.7 trillion in 2022. Culture and society The United States is home to a wide variety of ethnic groups, traditions, and customs. The country has been described as having the values of individualism and personal autonomy, as well as a strong work ethic and competitiveness. Voluntary altruism towards others also plays a major role; according to a 2016 study by the Charities Aid Foundation, Americans donated 1.44% of total GDP to charity—the highest rate in the world by a large margin. Americans have traditionally been characterized by a unifying political belief in an "American Creed" emphasizing consent of the governed, liberty, equality under the law, democracy, social equality, property rights, and a preference for limited government. The U.S. has acquired significant hard and soft power through its diplomatic influence, economic power, military alliances, and cultural exports such as American movies, music, video games, sports, and food. The influence that the United States exerts on other countries through soft power is referred to as Americanization.
Nearly all present Americans or their ancestors came from Europe, Africa, or Asia (the "Old World") within the past five centuries. Mainstream American culture is a Western culture largely derived from the traditions of European immigrants with influences from many other sources, such as traditions brought by slaves from Africa. More recent immigration from Asia and especially Latin America has added to a cultural mix that has been described as both a homogenizing melting pot and a heterogeneous salad bowl, with immigrants contributing to, and often assimilating into, mainstream American culture. Under the First Amendment to the Constitution, the United States is considered to have the strongest protections of free speech of any country. Flag desecration, hate speech, blasphemy, and lese majesty are all forms of protected expression. A 2016 Pew Research Center poll found that Americans were the most supportive of free expression of any polity measured. Additionally, they are the "most supportive of freedom of the press and the right to use the Internet without government censorship". The U.S. is a socially progressive country with permissive attitudes surrounding human sexuality. LGBTQ rights in the United States are among the most advanced by global standards. The American Dream, or the perception that Americans enjoy high levels of social mobility, plays a key role in attracting immigrants. Whether this perception is accurate has been a topic of debate. While mainstream culture holds that the United States is a classless society, scholars identify significant differences between the country's social classes, affecting socialization, language, and values. Americans tend to greatly value socioeconomic achievement, but being ordinary or average is promoted by some as a noble condition as well. The National Foundation on the Arts and the Humanities is an agency of the United States federal government that was established in 1965 to "develop and promote a broadly conceived national policy of support for the humanities and the arts in the United States, and for institutions which preserve the cultural heritage of the United States." It is composed of four sub-agencies: the National Endowment for the Arts, the National Endowment for the Humanities, the Institute of Museum and Library Services, and the Federal Council on the Arts and the Humanities. Colonial American authors were influenced by John Locke and other Enlightenment philosophers. The American Revolutionary Period (1765–1783) is notable for the political writings of Benjamin Franklin, Alexander Hamilton, Thomas Paine, and Thomas Jefferson. Shortly before and after the Revolutionary War, the newspaper rose to prominence, filling a demand for anti-British national literature. An early novel is William Hill Brown's The Power of Sympathy, published in 1789. Writer and critic John Neal in the early- to mid-19th century helped advance America toward a unique literature and culture by criticizing predecessors such as Washington Irving for imitating their British counterparts, and by influencing writers such as Edgar Allan Poe, who took American poetry and short fiction in new directions. Ralph Waldo Emerson and Margaret Fuller pioneered the influential Transcendentalism movement; Henry David Thoreau, author of Walden, was influenced by this movement. The conflict surrounding abolitionism inspired writers like Harriet Beecher Stowe and authors of slave narratives such as Frederick Douglass. Nathaniel Hawthorne's The Scarlet Letter (1850) explored the dark side of American history, as did Herman Melville's Moby-Dick (1851).
Major American poets of the 19th century American Renaissance include Walt Whitman, Melville, and Emily Dickinson. Mark Twain was the first major American writer to be born in the West. Henry James achieved international recognition with novels like The Portrait of a Lady (1881). As literacy rates rose, periodicals published more stories centered around industrial workers, women, and the rural poor. Naturalism, regionalism, and realism were the major literary movements of the period. While modernism generally took on an international character, modernist authors working within the United States more often rooted their work in specific regions, peoples, and cultures. Following the Great Migration to northern cities, African-American and black West Indian authors of the Harlem Renaissance developed an independent tradition of literature that rebuked a history of inequality and celebrated black culture. An important cultural export during the Jazz Age, these writings were a key influence on Négritude, a philosophy emerging in the 1930s among francophone writers of the African diaspora. In the 1950s, an ideal of homogeneity led many authors to attempt to write the Great American Novel, while the Beat Generation rejected this conformity, using styles that elevated the impact of the spoken word over mechanics to describe drug use, sexuality, and the failings of society. Contemporary literature is more pluralistic than in previous eras, with the closest thing to a unifying feature being a trend toward self-conscious experiments with language. Twelve American laureates have won the Nobel Prize in Literature. Media in the United States is broadly uncensored, with the First Amendment providing significant protections, as reiterated in New York Times Co. v. United States. The four major broadcasters in the U.S. are the National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), American Broadcasting Company (ABC), and Fox Broadcasting Company (Fox). The four major broadcast television networks are all commercial entities. The U.S. cable television system offers hundreds of channels catering to a variety of niches. In 2021, about 83% of Americans over age 12 listened to broadcast radio, while about 40% listened to podcasts. As of 2020, there were 15,460 licensed full-power radio stations in the U.S. according to the Federal Communications Commission (FCC). Much of the public radio broadcasting is supplied by National Public Radio (NPR), incorporated in February 1970 under the Public Broadcasting Act of 1967. U.S. newspapers with a global reach and reputation include The Wall Street Journal, The New York Times, The Washington Post, and USA Today. About 800 publications are produced in Spanish. With few exceptions, newspapers are privately owned, either by large chains such as Gannett or McClatchy, which own dozens or even hundreds of newspapers; by small chains that own a handful of papers; or, in an increasingly rare situation, by individuals or families. Major cities often have alternative newspapers to complement the mainstream daily papers, such as The Village Voice in New York City and LA Weekly in Los Angeles. The five most-visited websites in the world are Google, YouTube, Facebook, Instagram, and ChatGPT—all of them American-owned. Other popular platforms include X (formerly Twitter) and Amazon. In 2025, the U.S. was the world's second-largest video game market by revenue (after China). In 2015, the U.S.
video game industry consisted of 2,457 companies that employed around 220,000 people and generated $30.4 billion in revenue. There are 444 game publishers, developers, and hardware companies in California alone. According to the Game Developers Conference (GDC), the U.S. is the top location for video game development, with 58% of the world's game developers based there in 2025. The United States is well known for its theater. Mainstream theater in the United States derives from the old European theatrical tradition and has been heavily influenced by the British theater. By the middle of the 19th century, America had created new, distinct dramatic forms in the Tom Shows, the showboat theater, and the minstrel show. The central hub of the American theater scene is the Theater District in Manhattan, with its divisions of Broadway, off-Broadway, and off-off-Broadway. Many movie and television celebrities have gotten their big break working in New York productions. Outside New York City, many cities have professional regional or resident theater companies that produce their own seasons. The biggest-budget theatrical productions are musicals. U.S. theater has an active community theater culture. The Tony Awards recognize excellence in live Broadway theater and are presented at an annual ceremony in Manhattan. The awards are given for Broadway productions and performances. One is also given for regional theater. Several discretionary non-competitive awards are given as well, including a Special Tony Award, the Tony Honors for Excellence in Theatre, and the Isabelle Stevenson Award. Folk art in colonial America grew out of artisanal craftsmanship in communities that allowed commonly trained people to individually express themselves. It was distinct from Europe's tradition of high art, which was less accessible and generally less relevant to early American settlers. Cultural movements in art and craftsmanship in colonial America generally lagged behind those of Western Europe. For example, the prevailing medieval style of woodworking and primitive sculpture became integral to early American folk art, despite the emergence of Renaissance styles in England in the late 16th and early 17th centuries. The new English styles would have been early enough to make a considerable impact on American folk art, but American styles and forms had already been firmly adopted. Not only did styles change slowly in early America, but there was a tendency for rural artisans there to continue their traditional forms longer than their urban counterparts did—and far longer than those in Western Europe. The Hudson River School was a mid-19th-century movement in the visual arts tradition of European naturalism. The 1913 Armory Show in New York City, an exhibition of European modernist art, shocked the public and transformed the U.S. art scene. American Realism and American Regionalism sought to reflect and give America new ways of looking at itself. Georgia O'Keeffe, Marsden Hartley, and others experimented with new and individualistic styles, which would become known as American modernism. Major artistic movements such as the abstract expressionism of Jackson Pollock and Willem de Kooning and the pop art of Andy Warhol and Roy Lichtenstein developed largely in the United States. Major photographers include Alfred Stieglitz, Edward Steichen, Dorothea Lange, Edward Weston, James Van Der Zee, Ansel Adams, and Gordon Parks.
The tide of modernism and then postmodernism has brought global fame to American architects, including Frank Lloyd Wright, Philip Johnson, and Frank Gehry. The Metropolitan Museum of Art in Manhattan is the largest art museum in the United States and the fourth-largest in the world. American folk music encompasses numerous music genres, variously known as traditional music, traditional folk music, contemporary folk music, or roots music. Many traditional songs have been sung within the same family or folk group for generations, and sometimes trace back to such origins as the British Isles, mainland Europe, or Africa. The rhythmic and lyrical styles of African-American music in particular have influenced American music. Banjos were brought to America through the slave trade. Minstrel shows incorporating the instrument into their acts led to its increased popularity and widespread production in the 19th century. The electric guitar, invented in the 1930s and mass-produced by the 1940s, had an enormous influence on popular music, in particular due to the development of rock and roll. The synthesizer, turntablism, and electronic music were also largely developed in the U.S. Elements from folk idioms such as the blues and old-time music were adopted and transformed into popular genres with global audiences. Jazz grew from blues and ragtime in the early 20th century, developing from the innovations and recordings of composers such as W.C. Handy and Jelly Roll Morton. Louis Armstrong and Duke Ellington increased its popularity early in the 20th century. Country music developed in the 1920s, bluegrass and rhythm and blues in the 1940s, and rock and roll in the 1950s. In the 1960s, Bob Dylan emerged from the folk revival to become one of the country's most celebrated songwriters. The musical forms of punk and hip hop both originated in the United States in the 1970s. The United States has the world's largest music market, with a total retail value of $15.9 billion in 2022. Most of the world's major record companies are based in the U.S.; they are represented by the Recording Industry Association of America (RIAA). Mid-20th-century American pop stars such as Frank Sinatra and Elvis Presley became global celebrities and best-selling music artists, as did artists of the late 20th century, such as Michael Jackson, Madonna, Whitney Houston, and Mariah Carey, and of the early 21st century, such as Eminem, Britney Spears, Lady Gaga, Katy Perry, Taylor Swift, and Beyoncé. The United States has the world's largest apparel market by revenue. Apart from professional business attire, American fashion is eclectic and predominantly informal. Americans' diverse cultural roots are reflected in their clothing; however, sneakers, jeans, T-shirts, and baseball caps are emblematic of American styles. New York, with its Fashion Week, is considered to be one of the "Big Four" global fashion capitals, along with Paris, Milan, and London. A study demonstrated that general proximity to Manhattan's Garment District has been synonymous with American fashion since its inception in the early 20th century. A number of well-known designer labels, among them Tommy Hilfiger, Ralph Lauren, Tom Ford, and Calvin Klein, are headquartered in Manhattan. Some labels cater to niche markets, such as preteens. New York Fashion Week is one of the most influential fashion shows in the world, and is held twice each year in Manhattan; the annual Met Gala, also in Manhattan, has been called the fashion world's "biggest night". The U.S.
film industry has a worldwide influence and following. Hollywood, a district in central Los Angeles, the nation's second-most populous city, is also metonymous for the American filmmaking industry. The major film studios of the United States are the primary source of the most commercially successful movies in the world. Largely centered in the New York City region from its beginnings in the late 19th century through the first decades of the 20th century, the U.S. film industry has since been primarily based in and around Hollywood. Nonetheless, American film companies have been subject to the forces of globalization in the 21st century, and an increasing number of films are made elsewhere. The Academy Awards, popularly known as "the Oscars", have been held annually by the Academy of Motion Picture Arts and Sciences since 1929, and the Golden Globe Awards have been held annually since January 1944. The industry peaked in what is commonly referred to as the "Golden Age of Hollywood", from the early sound period until the early 1960s, with screen actors such as John Wayne and Marilyn Monroe becoming iconic figures. In the 1970s, "New Hollywood", or the "Hollywood Renaissance", was defined by grittier films influenced by French and Italian realist pictures of the post-war period. The 21st century has been marked by the rise of American streaming platforms, which came to rival traditional cinema. Early settlers were introduced by Native Americans to foods such as turkey, sweet potatoes, corn, squash, and maple syrup. Among the most enduring and pervasive examples are variations of the native dish called succotash. Early settlers and later immigrants combined these with foods they were familiar with, such as wheat flour, beef, and milk, to create a distinctive American cuisine. New World foods, especially pumpkin, corn, and potatoes, along with turkey as the main course, are part of a shared national menu on Thanksgiving, when many Americans prepare or purchase traditional dishes to celebrate the occasion. Characteristic American dishes such as apple pie, fried chicken, doughnuts, french fries, macaroni and cheese, ice cream, hamburgers, hot dogs, and American pizza derive from the recipes of various immigrant groups. Mexican dishes such as burritos and tacos preexisted the United States in areas later annexed from Mexico, and adaptations of Chinese cuisine as well as pasta dishes freely adapted from Italian sources are all widely consumed. American chefs have had a significant impact on society both domestically and internationally. In 1946, the Culinary Institute of America was founded by Katharine Angell and Frances Roth. It became the United States' most prestigious culinary school, where many of the most talented American chefs have studied prior to successful careers. The United States restaurant industry was projected at $899 billion in sales for 2020, and employed more than 15 million people, representing 10% of the nation's workforce directly. The industry is the country's second-largest private employer and the third-largest employer overall. The United States is home to over 220 Michelin star-rated restaurants, 70 of which are in New York City. Wine has been produced in what is now the United States since the 1500s, with the first widespread production beginning in what is now New Mexico in 1628. In the modern U.S., wine production is undertaken in all fifty states, with California producing 84 percent of all U.S. wine.
With more than 1,100,000 acres (4,500 km2) under vine, the United States is the fourth-largest wine-producing country in the world, after Italy, Spain, and France. The classic American diner, a casual restaurant type originally intended for the working class, emerged during the 19th century from converted railroad dining cars made stationary. The diner soon evolved into purpose-built structures whose number expanded greatly in the 20th century. The American fast-food industry developed alongside the nation's car culture. American restaurants developed the drive-in format in the 1920s, which they began to replace with the drive-through format by the 1940s. American fast-food restaurant chains, such as McDonald's, Burger King, Chick-fil-A, Kentucky Fried Chicken, Dunkin' Donuts and many others, have numerous outlets around the world. The most popular spectator sports in the U.S. are American football, basketball, baseball, soccer, and ice hockey. Their premier leagues are, respectively, the National Football League, the National Basketball Association, Major League Baseball, Major League Soccer, and the National Hockey League. All of these leagues enjoy wide-ranging domestic media coverage and, except for the MLS, all are considered the preeminent leagues in their respective sports in the world. While most major U.S. sports such as baseball and American football have evolved out of European practices, basketball, volleyball, skateboarding, and snowboarding are American inventions, many of which have become popular worldwide. Lacrosse and surfing arose from Native American and Native Hawaiian activities that predate European contact. The market for professional sports in the United States was approximately $69 billion in July 2013, roughly 50% larger than that of Europe, the Middle East, and Africa combined. American football is by several measures the most popular spectator sport in the United States. Although American football does not have a substantial following in other nations, the NFL does have the highest average attendance (67,254) of any professional sports league in the world. In 2024, the NFL generated over $23 billion in revenue, making it the most valuable professional sports league in the United States and the world. Baseball has been regarded as the U.S. "national sport" since the late 19th century. The most-watched individual sports in the U.S. are golf and auto racing, particularly NASCAR and IndyCar. At the collegiate level, earnings for member institutions exceed $1 billion annually, and college football and basketball attract large audiences, as the NCAA March Madness tournament and the College Football Playoff are among the most-watched national sporting events. In the U.S., intercollegiate sports serve as the main feeder system for professional and Olympic sports, with significant exceptions such as Minor League Baseball. This differs greatly from practices in nearly all other countries, where publicly and privately funded sports organizations serve this function. Eight Olympic Games have taken place in the United States. The 1904 Summer Olympics in St. Louis, Missouri, were the first-ever Olympic Games held outside of Europe. The Olympic Games will be held in the U.S. for a ninth time when Los Angeles hosts the 2028 Summer Olympics. U.S. athletes have won a total of 2,968 medals (1,179 gold) at the Olympic Games, the most of any country.
In other international competition, the United States is home to a number of prestigious events, including the America's Cup, the World Baseball Classic, the U.S. Open, and the Masters Tournament. The U.S. men's national soccer team has qualified for eleven World Cups, while the women's national team has won the FIFA Women's World Cup and the Olympic soccer tournament four and five times, respectively. The 1999 FIFA Women's World Cup was hosted by the United States; its final match was attended by 90,185 spectators, setting the world record at the time for the largest crowd at a women's sporting event. The United States hosted the 1994 FIFA World Cup and will co-host, along with Canada and Mexico, the 2026 FIFA World Cup.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mutasarrifate_of_Jerusalem] | [TOKENS: 1540]
Mutasarrifate of Jerusalem The Mutasarrifate of Jerusalem (Ottoman Turkish: قُدس شَرِيف مُتَصَرِّفلغى, Kudüs-i Şerif Mutasarrıflığı; Arabic: متصرفية القدس الشريف, Mutaṣarrifiyyat al-quds aš-šarīf, French: Moutassarifat de Jérusalem), also known as the Sanjak of Jerusalem, was a district in Ottoman Syria with special administrative status established in 1872. The district encompassed Jerusalem as well as Hebron, Jaffa, Gaza and Beersheba. Many documents from the Late Ottoman period refer to the Mutasarrifate of Jerusalem as Palestine; one such document describes Palestine as including the Sanjak of Nablus and Sanjak of Akka (Acre) as well, more in line with European usage.[nb 1] It was the seventh most heavily populated of the Ottoman Empire's 36 provinces. The district was separated from the Damascus Eyalet and placed directly under the supervision of the Ottoman central government in Constantinople (now Istanbul) in 1841, and formally created as an independent province in 1872 by Grand Vizier Mahmud Nedim Pasha. Scholars provide a variety of reasons for the separation, including increased European interest in the region and the strengthening of the Empire's southern border against the Khedivate of Egypt. Initially, the Mutasarrifate of Acre and the Mutasarrifate of Nablus were combined with the province of Jerusalem, with the combined province referred to in the register of the court of Jerusalem as the "Jerusalem Eyalet", and described by the British consul as the creation of "Palestine into a separate eyalet". After less than two months, the sanjaks of Nablus and Acre were separated and added to the Vilayet of Beirut, leaving just the Mutasarrifate of Jerusalem. In 1906, the Kaza of Nazareth was added to the Jerusalem Mutasarrifate as an exclave, primarily in order to allow the issuance of a single tourist permit to Christian travellers. The area was conquered by the Allied Forces in 1917 during World War I, and a military Occupied Enemy Territory Administration, OETA South, was set up to replace the Ottoman administration. OETA South consisted of the Ottoman sanjaks of Jerusalem, Nablus and Acre. The military administration was replaced by a British civilian administration in 1920, and the area of OETA South was incorporated into the British Mandate of Palestine in 1923. The political status of the Mutasarrifate of Jerusalem was unique among Ottoman provinces, as it was under the direct authority of the Ottoman government in Constantinople. The inhabitants identified themselves primarily in religious terms, 84% being Muslim Arabs. The district's villages were normally inhabited by farmers, while its towns were populated by merchants, artisans, landowners and money-lenders. The elite consisted of the religious leadership, wealthy landlords and high-ranking civil servants. History In 1841, the district was separated from the Damascus Eyalet and placed directly under Constantinople, and it was formally created as an independent Mutasarrifate in 1872. Before 1872, the Mutasarrifate of Jerusalem was officially a sanjak within the Syria Vilayet (created in 1864, following the Tanzimat reforms). The southern border of the Mutasarrifate of Jerusalem was redrawn in 1906, at the instigation of the British, who were interested in safeguarding their imperial interests and in making the border as short and patrollable as possible. In the mid-19th century the inhabitants of Palestine identified themselves primarily in terms of religious affiliation.
The population was more than 80% Muslim Arab, 10% Christian (mostly Arab), 5% Jewish, and 1% Druze. Towards the end of the 19th century, the idea that the region of Palestine or the Mutasarrifate of Jerusalem formed a separate political entity became widespread among the district's educated Arab classes. In 1904, the former Jerusalem official Najib Azuri formed the Ligue de la Patrie Arabe ("Arab Fatherland League") in Paris, France; its goal was to free Ottoman Syria and Iraq from Turkish domination. In 1908, after the Young Turk Revolution, Azuri proposed to the Ottoman Parliament that the mutasarrifate be elevated to the status of a vilayet. A section of the 1914 Ottoman census[nb 2] listed its population figures. The area was conquered by the Allied Forces in 1917 during the Palestine campaign of World War I, and a military Occupied Enemy Territory Administration (OETA South) was set up to replace the Ottoman administration. OETA South consisted of the Ottoman sanjaks of Jerusalem, Nablus and Acre. The military administration was replaced by a British civilian administration in 1920, and the area of OETA South became the territory of the British Mandate of Palestine in 1923, with some border adjustments with Lebanon and Syria. Boundaries The division was bounded on the west by the Mediterranean, on the east by the River Jordan and the Dead Sea, on the north by a line from the mouth of the river Auja to the bridge over the Jordan near Jericho, and on the south by a line from midway between Gaza and Arish to Aqaba. Contemporary Ottoman maps show the "Quds Al-Sharif Sancağı" or "Quds Al-Sharif Mutasarrıflığı". The 1907 maps show the 1860 borders between Ottoman Syria and the Khedivate of Egypt, although the border was moved to the current Israel–Egypt border in 1906, and the area north of the Negev Desert is labelled "Filastin" (Palestine). Administrative divisions C.R. Conder described the administrative duties which he saw performed in Palestine in 1874: The whole of Syria is under the Wâly of Damascus, and Palestine is under the Mutaserifs of Acre and Jerusalem, who are appointed by that Wâly. These provinces are again subdivided, and Kaimakâms or lieutenant-governors, are placed in such towns as Jaffa, Ramleh, Jenin, etc. ... The system of government is simple. The only duties are to collect the taxes, and to put down riots, which constantly occur. The crown-lands are farmed to the highest bidder... Soldiers are sent to collect the money, and the crop is assessed before reaping... The tax in the Mulk-lands has been definitely fixed, without regard to the difference of the harvests in good and bad years. — C.R. Conder, Tent Work in Palestine Administrative divisions of the Mutasarrifate (1872–1909). Mutasarrıfs of Jerusalem The Mutasarrıfs of Jerusalem were appointed by the Sublime Porte to govern the district. They were usually experienced civil servants who spoke little or no Arabic but knew a European language, most commonly French, in addition to Ottoman Turkish. List of mutasarrıfs after the 1908 Young Turk Revolution.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-180] | [TOKENS: 10728]
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, in Europe on 29 September 1995, and in other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei, Sony's director of public relations at the time, liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. Kutaragi and Idei learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's reversal, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" that native companies do not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented the proposal to them. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on the work from the Nintendo and Sega ventures to develop a console of their own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it use Kutaragi's audio chip and that Nintendo own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, attended by Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the demonstration gained Ohga's enthusiasm, a majority of those present at the meeting remained opposed, as did older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to keep the project alive and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995, respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's North American launch referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the PlayStation was not marketed under the Sony name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded the future compatibility of software should further hardware revisions be made. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect, given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. Its technical specifications were finalised in 1993, and its design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release date had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled realising how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on its first day and two million within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. The attention given to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared with the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November, it had already outsold the Saturn three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season, compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark prevented an official launch, and the officially distributed Sega Saturn initially took over the market; as the Saturn withdrew, however, PlayStation imports and widespread piracy increased. In China, the Sega Saturn was the most popular 32-bit console, but after it left the market the PlayStation's installed base grew to around 300,000 users by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people growing into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised with the controller's geometric button symbols standing in for letters ("Live in Your World. Play in Ours.") and with "U R NOT E" (a red "E", playing on "you are not ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclub owners, such as Ministry of Sound, and festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS one, a smaller, redesigned variant, which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, a milestone the PlayStation 2 later achieved faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can render up to 360,000 flat-shaded polygons per second, or 180,000 polygons per second with textures applied, in addition to up to 4,000 sprites. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model onwards. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software needed to program PlayStation games and applications, including a C compiler.
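A quick back-of-the-envelope check ties these figures together: at 16 bits per pixel, a single display buffer at the quoted resolutions fits comfortably within the console's 1 MB of video RAM, while the full 16.7-million-colour depth (24 bits per pixel) at the maximum resolution nearly fills it. The C sketch below shows only this arithmetic; the bit-depth assignments are assumptions for illustration, not a description of the console's actual VRAM organisation.

#include <stdio.h>

/* Rough framebuffer arithmetic for the specifications quoted above.
 * Assumption: 16 bits per pixel for ordinary display modes, with the
 * 16.7-million-colour figure corresponding to 24 bits per pixel.
 * Illustrative only; not the console's actual memory layout. */
static void framebuffer_cost(int width, int height, int bits_per_pixel)
{
    long bytes = (long)width * height * bits_per_pixel / 8;
    printf("%4d x %-4d @ %2d bpp: %7ld bytes (%5.1f%% of 1 MB VRAM)\n",
           width, height, bits_per_pixel, bytes,
           100.0 * bytes / (1024.0 * 1024.0));
}

int main(void)
{
    framebuffer_cost(256, 224, 16); /* lowest documented resolution  */
    framebuffer_cost(640, 480, 16); /* highest documented resolution */
    framebuffer_cost(640, 480, 24); /* maximum colour depth          */
    return 0;
}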
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square (△, ○, ✕, □). Rather than labelling its buttons with the traditionally used letters or numbers, the PlayStation controller established a trademark look which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997, to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that this haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Rumble Pak for the Nintendo 64 controller. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two ("dual") vibration motors ("shock"). Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play audio CDs. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console with no game inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. The PlayStation can be emulated on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and therefore omitted it when duplicating discs, since the laser pick-up system of any optical disc drive would interpret the wobble as an oscillation of the disc surface and compensate for it in the reading process. Early PlayStations, particularly early 1000-series models, suffer from skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents which lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments totalled 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released) and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for all five editors, this was the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division coming to account for roughly 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, all continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s".

In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to adopt a CD format like the PlayStation's. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; it was likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games to consumers at about 40% lower cost than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released far fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo itself or second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the original design without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/EWorld] | [TOKENS: 1726]
Contents eWorld

eWorld was an online service operated by Apple Inc. between June 1994 and March 1996. The services included email (eMail Center), news, software installs and a bulletin board system (Community Center). Users of eWorld were often referred to as "ePeople." Based on a similar America Online service, eWorld was expensive compared with other services, was not well marketed, and failed to attract a high number of subscribers. The service was only available on Apple's Macintosh and Apple IIGS, with limited support on the Newton MessagePad handheld devices, though a PC version had been planned.

History

In the early 1990s, online services were becoming widely popular, just as Apple was looking into replacing its aging online service known as AppleLink. AppleLink had originally been developed at the urging of Jon Ebbs, Apple's head of support, who convinced the management that an online service could lower support costs. AppleLink had initially been available only to dealers when it launched in 1985, but was later opened to developers and became the de facto internal e-mail service within Apple. The "back end" of AppleLink was hosted by GE Information Services, which charged Apple about $300,000 a year, as well as charging end users up to $15 for daytime access to the system. Apple had tried to negotiate a better rate on several occasions, but GE knew that switching would cost Apple even more, and refused to lower the costs. Nevertheless, Apple began to implement changes.

Before the advent of eWorld, Apple had started a consumer-oriented online support service known as AppleLink Personal Edition. Related to the older system in name only, this service was run by Quantum Computer Services, which had earlier established the Q-Link online service for the Commodore 64 personal computer. Quantum's Steve Case moved to California for three months to convince Apple to let Quantum run its new consumer service. In 1987, Apple allowed Quantum to run the service and granted it use of the Apple logo. Apple received a 10% royalty from all the system's users while Quantum generated revenue by running the service. The ideologies of the companies soon clashed. Quantum wanted to bundle the AppleLink software with new Macs and distribute it through direct marketing; at the time, Apple did not believe in giving away non-system software for free. That, coupled with Apple's strict design guidelines, eventually caused Quantum to terminate the contract. Steve Case had, however, negotiated a rather beneficial contract, granting Quantum rights to the use of the Apple logo and preventing Apple from marketing its own online service. In 1991, Quantum was renamed America Online and the service was opened up to PC and Macintosh users.

Apple wanted out of its contract with GE, which was costing far more money than it was saving, and wanted to provide its own Mac-only competition to AOL's service. It canceled the GE contract and formed an Online Services Group. The group licensed the original AppleLink Personal Edition software from AOL and developed it into what would become known as eWorld. The group also struck a deal with AOL to help develop the service and spent 1993 working on the new software and the various services to be offered. According to an AOL press release on January 5, 1994, eWorld was "created using technology licensed from America Online.
The two companies have been collaborating to build the platform for Apple's online services since December 1992, when America Online granted Apple a non-exclusive license to use the company's interactive services platform." On January 5, 1994, Apple announced eWorld at the 1994 Macworld Conference & Expo, where it invited attendees to become beta testers for the service. On June 20 of that year, the service went into full operation. The eWorld service combined the vast technical and support archives of the previous AppleLink services with a more traditional community service like AOL and CompuServe. The eWorld service was accessible only from Macs and, in part, from Newton OS devices. A Windows version was promised for 1995; it never left the early beta stage.

Features

The primary portal of the service was the eWorld software. The software was based around a "town hall" metaphor in which each of the service's branches was an individual "building". Over 400 media and technology companies created information products on the service. Several Mac software and hardware companies opened virtual forums on the service to provide customer support and general product information to subscribers. The main eWorld portal also encompassed a wide variety of news and information services. In addition to information access, two heavily used areas of eWorld were the eMail Center and the Community Center. The Community Center offered chat rooms and an online BBS where thousands of "ePeople" (eWorld users) congregated to chat about various subjects. The eMail Center was a virtual post office. The service also housed support and Apple technical documents. The eWorld Web Browser, introduced in eWorld 1.1 as part of its "Internet On-Ramp" features, let users browse pages on the web. The browser had features for FTP uploading and web images, and settings to configure a default homepage address. Though separate from the main eWorld application, the browser worked only through an eWorld connection, not through any other network or online service. eWorld's distinctive user experience was developed by Cleo Huggins, manager of the human interface group on the development team; Huggins also coined the name eWorld. The signature illustrations were created by Mark Drury. The development team was led by Scott Converse, and the product management team by Richard Gingras. The eWorld project at Apple was led by Peter Friedman.

Demise

The service cost $8.95 per month, which included two free night-time or weekend hours. Subsequent hours cost $4.95, with weekday hours (6 am–6 pm) costing $7.95. Apple originally kept the price high to moderate demand, but never dropped it when the demand did not materialize. After the first year of service, eWorld had 90,000 subscribers. In 1995, limited Internet service was made available, and as of September 1995 the service had 115,000 subscribers, compared with AOL's 3.5 million (including one million outside the United States). Apple's marketing and promotion efforts were at best indifferent. The service was only available on the Macintosh, along with e-mail and system update support on the Newton handheld. Apple was in a challenging financial position at the time, and CEO Michael Spindler told the Online Services Group that significant marketing for the service could not be provided, so eWorld shipped on new Macs with only an icon on the desktop and a brochure in the box.
There was also little if anything in the way of media marketing for the service. The promised Windows version of eWorld was never launched, following a decision by Apple's senior management to position eWorld as a unique service for Macintosh owners. Apple's management decided that the product was doomed to fail in a market where AOL had such a commanding lead. The company was also cutting costs: in June 1995 it had over $1 billion in backorders, and it posted a $68 million loss in the fourth quarter of 1995. In January 1996, Spindler was asked to resign as CEO and was replaced by Gil Amelio, the former CEO of National Semiconductor. Several products and projects were scrapped in an effort to put the company back into the black. On March 31, 1996, at 12:01 am Pacific Time, the service was officially shut down. Remaining eWorld subscribers were offered incentives to switch to AOL, which had already been hosting Macintosh-oriented content within the Mac Forums of its Computing Channel. The eWorld/AppleLink technical support archives moved to Apple's website. When the Online Services Group was disbanded, many of its members left Apple. Peter Friedman eventually formed the chat community website TalkCity with Chris Christensen and Jenna Woodul. Scott Converse became a senior executive in Paramount Pictures' Digital Entertainment Division. James Isaacs joined Danger Inc. (acquired by Microsoft in 2008). Richard Gingras and Jonathan Rosenberg joined the newly formed broadband access venture @Home Network. As of August 2025, 29 years after its discontinuation, attempting to access the eWorld.com website still automatically redirects to the apple.com homepage.
========================================
[SOURCE: https://www.wired.com/about/rss-feeds/] | [TOKENS: 537]
RSS Feeds

Click on a feed to add it to your site or favorite RSS Reader:

WIRED Top Stories: Your essential guide to what's next, delivering the WIRED take on the intersection of technology, science, business, and culture.
Business: The people and companies that matter in the business of technology.
Artificial Intelligence: The latest AI news, from machine learning to computer vision and more.
Culture: Working the WIRED culture beat, from movies and music to comics and gaming.
Gear: Get first looks at dozens of products, plus in-depth reviews of the newest, the best, and the essential.
Ideas: Provocative and enlightening ideas, ruminations, and theories, from the thinkers of WIRED.
Science: What's new on the front lines of science, from deep space to DNA sequencing.
Security: Your daily briefing on security, freedom, and privacy in the WIRED world.
Backchannel: Longform narratives and investigations on how emerging technologies affect culture, the economy, and politics.
WIRED Guides: Everything we know about everything that matters. Deep dives into big issues.
========================================
[SOURCE: https://www.theverge.com/tech/881878/whatsapp-helps-you-catch-the-group-chat-up] | [TOKENS: 523]
Posted Feb 20, 2026 at 1:03 PM UTC
Dominic Preston

WhatsApp helps you catch the group chat up.

It's rolling out Group Message History, which lets group chat admins (and members, depending on permissions) share the most recent messages with new members of a chat. That should make it easier to catch people up without blasting old messages back to the whole group.

You can share up to 100 messages at once. Image: WhatsApp
========================================
[SOURCE: https://www.mako.co.il/food-cooking_magazine/10_minute_recipes/Recipe-46f8f9608906c91026.htm] | [TOKENS: 1514]
Fried egg with mozzarella and honey

A fried egg is a kind of magic in its own right, and today we want to introduce you to an upgrade that takes that magic somewhere new. All you need to add is honey and grated mozzarella, and we swear: this may be the tastiest egg you have ever eaten.

By Yael Katzav, mako Food

Work time: ten minutes. Total time: ten minutes. Difficulty: anyone can make it. Kosher status: dairy.

Ingredients (1 serving):
1 tablespoon honey
2-3 tablespoons grated mozzarella
2 eggs
Salt
Black pepper
Chopped chives, for serving

Preparation:
01 Drizzle the honey over the bottom of a non-stick pan.
02 Scatter the mozzarella over the honey and crack the eggs onto the mozzarella. Season with salt and pepper.
03 Cover the pan and cook over a medium flame until the eggs reach your preferred doneness.
04 Fold the omelette over, scatter with the chopped chives and eat immediately.

For more excellent recipes, scroll through Yael Katzav's TikTok or her stunning Instagram. (Photos: Yael Katzav, mako Food.)

Bon appétit!
========================================
[SOURCE: https://www.ynet.co.il/sport/article/hj6lnciubx] | [TOKENS: 461]
He wanted to commemorate colleagues killed in the war, was suspended from the Olympics, and became a symbol

The Ukrainian skeleton racer Vladyslav Heraskevych decided to wear at the Winter Olympics a helmet bearing the names of 24 athletes and coaches who lost their lives in Russian attacks and in the fighting at the front, but the Olympic committee decided not to allow it and suspended him in an infuriating decision. His insistence exposed hypocrisy, but it also sparked a new movement in his country, and even earned him a different medal: the Medal of Freedom. Listed here are the names of the victims he wanted to commemorate, whom the Olympic committee preferred to forget.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Emirate_of_Transjordan] | [TOKENS: 4923]
Contents Emirate of Transjordan

The Emirate of Transjordan (Arabic: إمارة شرق الأردن, romanized: Imārat Sharq al-Urdun, lit. 'the emirate east of the Jordan'), officially the Amirate of Trans-Jordan, was a British protectorate under the League of Nations mandate established on 11 April 1921, which remained as such until achieving formal independence as the Kingdom of Transjordan in 1946. After the Ottoman defeat in World War I, the Transjordan region was administered within OETA East; after the British withdrawal in 1919, this region gained de facto recognition as part of the Hashemite-ruled Arab Kingdom of Syria, administering an area broadly comprising the areas of the modern countries of Syria and Jordan. Transjordan became a no man's land following the July 1920 Battle of Maysalun, during which period the British in neighbouring Mandatory Palestine chose to avoid "any definite connection between it and Palestine". Abdullah entered the region in November 1920, moving to Amman on 2 March 1921; later in the month a conference was held with the British during which it was agreed that Abdullah bin Hussein would administer the territory under the auspices of the British Mandate for Palestine with a fully autonomous governing system. The Hashemite dynasty ruled the protectorate, as well as neighbouring Mandatory Iraq and, until 1925, the Kingdom of Hejaz to the south. On 25 May 1946, the emirate became the "Hashemite Kingdom of Transjordan", achieving full independence on 17 June 1946 when, in accordance with the Treaty of London, ratifications were exchanged in Amman. In 1949, after annexing the West Bank in Palestine and "uniting" both banks of the Jordan river, it was constitutionally renamed the "Hashemite Kingdom of Jordan", commonly referred to as Jordan.

Background

From July 1915 to March 1916, a series of ten letters were exchanged between Hussein bin Ali, Sharif of Mecca, and Lieutenant Colonel Sir Henry McMahon, British High Commissioner to Egypt. In the letters – particularly that of 24 October 1915 – the British government agreed to recognize Arab independence after the war in exchange for the Sharif of Mecca launching the Arab Revolt against the Ottoman Empire. The area of Arab independence was defined to be "in the limits and boundaries proposed by the Sherif of Mecca", with the exception of "portions of Syria" lying to the west of "the districts of Damascus, Homs, Hama and Aleppo"; conflicting interpretations of this description were to cause great controversy in subsequent years. Around the same time, another secret treaty was negotiated between the United Kingdom and France, with assent from the Russian Empire and Italy, to define their mutually agreed spheres of influence and control in an eventual partition of the Ottoman Empire. The primary negotiations leading to the agreement occurred between 23 November 1915 and 3 January 1916, on which date the British and French diplomats, Mark Sykes and François Georges-Picot, initialled an agreed memorandum. The agreement was ratified by their respective governments on 9 and 16 May 1916. The agreement allocated to Britain control of what is today southern Israel and Palestine, Jordan and southern Iraq, and an additional small area that included the ports of Haifa and Acre to allow access to the Mediterranean. The Palestine region, with smaller boundaries than the later Mandatory Palestine, was to fall under an "international administration".
The agreement was initially used directly as the basis for the 1918 Anglo–French Modus Vivendi, which agreed a framework for the Occupied Enemy Territory Administration in the Levant. Shortly after the war, the French ceded Palestine and Mosul to the British. The geographical area that was later to become Transjordan was allocated to Britain. Under the Ottoman Empire, most of Transjordan had been part of the Syria Vilayet, primarily the sanjaks of Hauran and Ma'an. The inhabitants of northern Transjordan had traditionally associated with Syria, and those of southern Transjordan with the Arabian Peninsula. There was no Ottoman district known as Transjordan; instead there were the districts of Ajlun, al-Balqa, al-Karak and Ma'an. In the second half of the nineteenth century, the Tanzimat reforms laid the foundation for state formation in the area. The Hejaz railway was completed in 1908 and greatly facilitated the Hajj pilgrimage along the Syrian route from Damascus, as well as extending the Ottoman military and administrative reach southwards.

Establishment of the Emirate

During World War I, Transjordan saw much of the fighting of the Arab Revolt against Ottoman rule. Assisted by the British army officer T. E. Lawrence, the Sharif of Mecca Hussein bin Ali led the successful revolt which contributed to the Ottoman defeat and the breaking up of its empire. Ottoman forces were forced to withdraw from Aqaba in 1917 after the Battle of Aqaba. In 1918 the British Foreign Office noted the Arab position east of the Jordan; Biger wrote: "At the beginning of 1918, soon after the southern part of Palestine was conquered, the Foreign Office determined that Faisal's authority over the area that he controls on the Eastern side of the Jordan river should be recognized. We can confirm this recognition of ours even if our forces do not currently control major parts of Transjordan." In March 1920, the Hashemite Kingdom of Syria was declared by Faisal bin Hussein in Damascus, encompassing most of what later became Transjordan. At this point, the sparsely inhabited southern part of Transjordan was claimed by both Faisal's Syria and his father's Kingdom of Hejaz. Following the assignment of the mandates to France and Britain at the San Remo conference in April, the British appointed Sir Herbert Samuel High Commissioner in Palestine from 1 July 1920, with a remit over the area west of the Jordan. After the French ended the Kingdom of Syria at the battle of Maysalun, Transjordan became, for a short time, a no man's land or, as Samuel put it, "left politically derelict". In August 1920, Sir Herbert Samuel's request to extend the frontier of British territory beyond the River Jordan and to bring Transjordan under his administrative control was rejected. The British Foreign Secretary, Lord Curzon, proposed instead that British influence in Transjordan should be advanced by sending a few political officers, without military escort, to encourage self-government and give advice to local leaders in the territory. Following Curzon's instruction, Samuel set up a meeting with Transjordanian leaders at which he presented British plans for the territory. The local leaders were reassured that Transjordan would not come under Palestinian administration and that there would be no disarmament or conscription.
Samuel's terms were accepted, and he returned to Jerusalem, leaving Captain Alec Kirkbride as the British representative east of the Jordan until 21 November 1920, when Abdullah, the brother of the recently deposed King Faisal, marched into Ma'an at the head of an army of 300 men from the Hejazi tribe of 'Utaybah. Facing no opposition, Abdullah and his army had effectively occupied most of Transjordan by March 1921. In early 1921, prior to the convening of the Cairo Conference, the Middle East Department of the Colonial Office set out the situation as follows:

Distinction to be drawn between Palestine and Trans-Jordan under the Mandate. His Majesty's Government are responsible under the terms of the Mandate for establishing in Palestine a national home for the Jewish people. They are also pledged by the assurances given to the Sherif of Mecca in 1915 to recognise and support the independence of the Arabs in those portions of the (Turkish) vilayet of Damascus in which they are free to act without detriment to French interests. The western boundary of the Turkish vilayet of Damascus before the war was the River Jordan. Palestine and Trans-Jordan do not, therefore, stand upon quite the same footing. At the same time, the two areas are economically interdependent, and their development must be considered as a single problem. Further, His Majesty's Government have been entrusted with the Mandate for "Palestine". If they wish to assert their claim to Trans-Jordan and to avoid raising with other Powers the legal status of that area, they can only do so by proceeding upon the assumption that Trans-Jordan forms part of the area covered by the Palestine Mandate. In default of this assumption Trans-Jordan would be left, under article 132 of the Treaty of Sèvres, to the disposal of the principal Allied Powers. Some means must be found of giving effect in Trans-Jordan to the terms of the Mandate consistently with "recognition and support of the independence of the Arabs".

The Cairo Conference of March 1921 was convened by Winston Churchill, then Britain's Colonial Secretary. With the mandates of Palestine and Iraq awarded to Britain, Churchill wished to consult with Middle East experts. At his request, Gertrude Bell, Sir Percy Cox, T. E. Lawrence, Sir Kinahan Cornwallis, Sir Arnold T. Wilson, Iraqi minister of war Jaʿfar al-Askari, Iraqi minister of finance Sasun Effendi (Sasson Heskayl), and others gathered in Cairo, Egypt. An additional outstanding question was the policy to be adopted in Transjordan to prevent anti-French military actions from being launched within the allied British zone of influence. The Hashemites had been Associated Powers during the war, and a peaceful solution was urgently needed. The two most significant decisions of the conference were to offer the throne of Iraq to the emir Faisal ibn Hussein (who became Faisal I of Iraq) and an emirate of Transjordan (now Jordan) to his brother Abdullah ibn Hussein (who became Abdullah I of Jordan). The conference provided the political blueprint for British administration in both Iraq and Transjordan; in offering these two regions to the sons of Hussein bin Ali, Churchill stated that the spirit, if not the letter, of Britain's wartime promises to the Arabs might be fulfilled.
After further discussions between Churchill and Abdullah in Jerusalem, it was mutually agreed that Transjordan was accepted into the Palestine mandatory area as an Arab country apart from Palestine, with the proviso that it would be, initially for six months, under the nominal rule of the emir Abdullah and that it would not form part of the Jewish national home to be established west of the River Jordan. Abdullah was then appointed Emir of the Transjordan region in April 1921. On 21 March 1921, the Foreign and Colonial Office legal advisers decided to introduce Article 25 into the Mandate for Palestine, which brought Transjordan under the Palestine mandate and stated that in that territory Britain could "postpone or withhold" those articles of the Mandate concerning a Jewish national home. It was approved by Curzon on 31 March 1921, and the revised final draft of the mandate (including Transjordan) was forwarded to the League of Nations on 22 July 1922. In August 1922, the British government presented a memorandum to the League of Nations stating that Transjordan would be excluded from all the provisions dealing with Jewish settlement; this memorandum was communicated to the League on 12 August and approved by it on 16 September. Abdullah established his government on 11 April 1921. Britain administered the part west of the Jordan as Palestine, and the part east of the Jordan as Transjordan. Technically they remained one mandate, but most official documents referred to them as if they were two separate mandates. The Palestine Order in Council, 1922, which established the legal basis for the Mandatory Government in Palestine, explicitly excluded Transjordan from its application apart from giving the High Commissioner some discretionary power there. In April/May 1923, Transjordan was granted a degree of independence, with Abdullah as ruler and St John Philby as chief representative. The Hashemite emir Abdullah, elder son of Britain's wartime Arab ally Hussein bin Ali, was placed on the throne of Transjordan. The applicable parts of the Mandate for Palestine were stated in a decision of 16 September 1922, which provided for the separate administration of Transjordan. The government of the territory was, subject to the mandate, formed by Abdullah, brother of King Faisal I of Iraq, who had been at Amman since February 1921. Britain recognized Transjordan as an independent government on 15 May 1923, and gradually relinquished control, limiting its oversight to financial, military and foreign policy matters. This affected the goals of Revisionist Zionism, which sought a state on both banks of the Jordan. The movement claimed that the decision effectively severed Transjordan from Palestine, and so reduced the area on which a future Jewish state in the region could be established. The southern border between Transjordan and Arabia was considered strategic for Transjordan in order to avoid being landlocked, with intended access to the sea via the Port of Aqaba. The southern region of Ma'an-Aqaba, a large area with a small population of just 10,000, was administered by OETA East (later the Arab Kingdom of Syria, and then Mandatory Transjordan) and claimed by the Kingdom of Hejaz. In OETA East, Faisal had appointed a kaymakam (or sub-governor) at Ma'an, whereas the kaymakam at Aqaba, who "disregarded both Husein in Mecca and Feisal in Damascus with impunity", had been instructed by Hussein to extend his authority to Ma'an.
This technical dispute did not rise to any form of open struggle, and the Kingdom of Hejaz was to take de facto control after Faisal's administration was defeated by the French.[b] Following the 1924–25 Saudi conquest of Hejaz, Hussein's army fled to the Ma'an region, which was then formally announced as annexed by Abdullah's Transjordan. Ibn Saud privately agreed to respect this position in an exchange of letters at the time of the 1927 Treaty of Jeddah. The Negev region was added to Palestine on 10 July 1922, having been conceded by British representative John Philby "in Trans-Jordan's name".[c] Abdullah made a request for the Negev to be added to Transjordan in late 1922, and again in 1925, but this was rejected. The location of the eastern border between Transjordan and Iraq was considered strategic with respect to the proposed construction of what became the Kirkuk–Haifa oil pipeline. It was first set out on 2 December 1922, in a treaty to which Transjordan was not a party – the Uqair Protocol between Iraq and Nejd. It described the western end of the Iraq-Nejd boundary as "the Jebel Anazan situated in the neighbourhood of the intersection of latitude 32 degrees north longitude 39 degrees east where the Iraq-Najd boundary terminated", thereby implicitly confirming this as the point at which the Iraq-Nejd boundary became the Transjordan-Nejd boundary. This followed a proposal from Lawrence in January 1922 that Transjordan be extended to include Wadi Sirhan as far south as al-Jauf, in order to protect Britain's route to India and contain Ibn Saud. France transferred the District of Ramtha from Syria in 1921. With respect to demographics, in 1924 the British stated: "No census of the population has been taken, but the figure is thought to be in the neighbourhood of 200,000, of whom some 10,000 are Circassians and Chechen; there are about 15,000 Christians and the remainder, in the main, are Moslem Arabs." No census was taken throughout the British mandate period, but the population was estimated to have grown to 300,000–350,000 by the early 1940s. The most serious threats to Abdullah's position in Transjordan were repeated Wahhabi incursions by the Ikhwan tribesmen from Najd, in modern Saudi Arabia, into the southern parts of his territory. The emir was powerless to repel those raids by himself and had to appeal for help to the British, who maintained a military base with a small air force at Marka, close to Amman. The British military force was the primary obstacle against the Ikhwan between 1922 and 1924, and was also used to help Abdullah suppress local rebellions at Kura in 1921 and by Sultan Adwan in 1923.
Establishment of the kingdom

Transfer of authority to an Arab government took place gradually in Transjordan, starting with Abdullah's appointment as Emir of Transjordan on 1 April 1921 and the formation of his first government on 11 April 1921.[e] The independent administration was recognised in a statement made public in Amman on 25 May 1923 (the statement had been agreed in October 1922, following the approval of the revised Mandate on 16 September 1922, with publication made conditional on completion of a probationary period): "Subject to the approval of the League of Nations, His Britannic Majesty will recognise the existence of an independent Government in Trans-jordan under the rule of His Highness the Amir Abdullah, provided that such Government is constitutional and places His Britannic Majesty in a position to fulfil his international obligations in respect of the territory by means of an Agreement to be concluded with His Highness"[f]

During the eleventh session of the League of Nations' Permanent Mandates Commission in 1927, Sir John E. Shuckburgh summarised the status of Transjordan: "It is not part of Palestine but it is part of the area administered by the British Government under the authority of the Palestine Mandate. The special arrangements there really go back to the old controversy about our war time pledges to the Arabs which I have no wish to revive. The point is that on our own interpretation of those pledges the country East of the Jordan – though not the country West of the Jordan – falls within the area in respect of which we promised during the war to recognise and support the independence of the Arabs. Transjordan is in a wholly different position from Palestine and it was considered necessary that special arrangements should be made there."

Transfer of most administrative functions occurred in 1928, including the creation of the post of High Commissioner for Transjordan.[g] Transjordan remained under British control until the first Anglo-Transjordanian treaty was concluded on 20 February 1928. Transjordan became nominally independent, although the British still maintained a military presence, controlled foreign affairs, and retained some financial control over the Emirate. This failed to respond to Transjordanian demands for a fully sovereign and independent state, a failure that led to widespread disaffection with the treaty among Transjordanians, prompting them to convene a national conference (25 July 1928), the first of its kind, to examine the articles of the treaty and adopt a plan of political action. According to the U.S. State Department Digest of International Law, the status of the mandate was not altered by this agreement between the United Kingdom and the Emirate, which recognized the existence of an independent government in Transjordan and defined and limited its powers; the ratifications were exchanged on 31 October 1929. On 17 January 1946, Ernest Bevin, the British Foreign Secretary, announced in a speech at the General Assembly of the United Nations that the British Government intended to take steps in the near future to establish Transjordan as a fully independent and sovereign state.
The Treaty of London was signed by the British Government and the Emir of Transjordan on 22 March 1946 as a mechanism for recognising the full independence of Transjordan upon ratification by both countries' parliaments. Transjordan's impending independence was recognized on 18 April 1946 by the League of Nations during the last meeting of that organization. On 25 May 1946, Transjordan became the "Hashemite Kingdom of Transjordan" when the ruling "Amir" was re-designated as "King" by the parliament of Transjordan on the day it ratified the Treaty of London. 25 May is still celebrated as independence day in Jordan, although officially the mandate for Transjordan ended on 17 June 1946, when, in accordance with the Treaty of London, the ratifications were exchanged in Amman and Transjordan gained full independence. In 1949 the country's official name was changed to the "Hashemite Kingdom of Jordan". When King Abdullah applied for membership in the newly formed United Nations, his request was vetoed by the Soviet Union, which claimed that the nation was not "fully independent" of British control. This resulted in another treaty with Britain in March 1948, in which all restrictions on sovereignty were removed. Despite this, Jordan was not a full member of the United Nations until 14 December 1955. The Anglo-American treaty, also known as the Palestine Mandate Convention, permitted the US to delay any unilateral British action to terminate the mandate. The earlier proclamation of the independence of Syria and Lebanon had said "the independence and sovereignty of Syria and Lebanon will not affect the juridical situation as it results from the Mandate Act. Indeed, this situation could be changed only with the agreement of the Council of the League of Nations, with the consent of the Government of the United States, a signatory of the Franco-American Convention of 4 April 1924". The U.S. adopted the policy that formal termination of the mandate with respect to Transjordan would follow the precedent established by the French Mandate for Syria and the Lebanon: termination would generally be recognized upon the admission of Transjordan into the United Nations as a fully independent country. Members of the U.S. Congress introduced resolutions demanding that the U.S. Representative to the United Nations be instructed to seek postponement of any international determination of the status of Transjordan until the future status of Palestine as a whole was determined. The U.S. State Department also received a legal argument from Rabbis Wise and Silver objecting to the independence of Transjordan. At the 1947 Pentagon Conference, the U.S. advised Great Britain that it was withholding recognition of Transjordan pending a decision on the Palestine question by the United Nations. Transjordan applied for membership of the United Nations on 26 June 1946. The Polish representative said that he did not object to the independence of Transjordan, but requested that the application be postponed for a year on the grounds that legal procedures required by the Covenant of the League of Nations had not been carried out. The British representative responded that the League of Nations had already approved the termination of the mandate in Transjordan. When the issue was voted on, Transjordan's application achieved the required total number of votes, but was vetoed by the Soviet Union, which did not approve the membership of any country with which it did not have diplomatic relations.
This problem, and similar problems caused by vetoes of the memberships of Ireland, Portugal, Austria, Finland and Italy, took several years and many votes to resolve. Jordan was finally admitted to membership on 14 December 1955.
========================================
[SOURCE: https://en.wikipedia.org/wiki/MFEM] | [TOKENS: 109]
Contents MFEM

MFEM is an open-source C++ library for solving partial differential equations using the finite element method, developed and maintained by researchers at the Lawrence Livermore National Laboratory and the MFEM open-source community on GitHub. MFEM is free software released under a BSD license. The library consists of C++ classes that serve as building blocks for developing finite element solvers applicable to problems of fluid dynamics, structural mechanics, electromagnetics, radiative transfer and many others.
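To give a sense of how these building-block classes compose into a solver, below is a minimal sketch of a Poisson problem (find u with -Δu = 1 and u = 0 on the boundary), modelled on MFEM's introductory examples. The class and method names follow MFEM's public C++ API, but exact signatures can vary between releases, so treat this as an illustrative sketch rather than a definitive program.

#include "mfem.hpp"
using namespace mfem;

int main()
{
   // Build a simple 16x16 Cartesian mesh of quadrilaterals.
   Mesh mesh = Mesh::MakeCartesian2D(16, 16, Element::QUADRILATERAL);

   // First-order H1 (continuous nodal) finite element space on the mesh.
   H1_FECollection fec(1, mesh.Dimension());
   FiniteElementSpace fespace(&mesh, &fec);

   // Mark all boundary degrees of freedom as essential (Dirichlet).
   Array<int> ess_tdof_list;
   Array<int> ess_bdr(mesh.bdr_attributes.Max());
   ess_bdr = 1;
   fespace.GetEssentialTrueDofs(ess_bdr, ess_tdof_list);

   // Right-hand side: the linear form (1, v) for all test functions v.
   ConstantCoefficient one(1.0);
   LinearForm b(&fespace);
   b.AddDomainIntegrator(new DomainLFIntegrator(one));
   b.Assemble();

   // Stiffness matrix: the bilinear form (grad u, grad v).
   BilinearForm a(&fespace);
   a.AddDomainIntegrator(new DiffusionIntegrator(one));
   a.Assemble();

   // Eliminate the boundary conditions, solve with preconditioned
   // conjugate gradients, and recover the finite element solution.
   GridFunction x(&fespace);
   x = 0.0;
   OperatorPtr A;
   Vector B, X;
   a.FormLinearSystem(ess_tdof_list, x, b, A, X, B);
   GSSmoother M((SparseMatrix&)(*A));
   PCG(*A, M, B, X, 1, 400, 1e-12, 0.0);
   a.RecoverFEMSolution(X, b, x);

   // Save the mesh and solution for later inspection.
   mesh.Save("mesh.mesh");
   x.Save("sol.gf");
   return 0;
}

Compiled against an MFEM build and linked with the library, a program of this shape produces mesh and solution files that can be visualized with GLVis, the companion tool commonly paired with MFEM.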
========================================
[SOURCE: https://www.ynet.co.il/sport/worldbasketball/article/s1zmzt8obg] | [TOKENS: 279]
After the All-Star break: Avdija nears a triple-double, Portland humiliated by 54 points

The Blazers came back from the break in poor form: the Israeli forward scored 15 points with 13 assists and 8 rebounds (alongside 6 turnovers), and his team suffered a 157:103 home defeat to the Denver Nuggets. Jokić (32 points) and Murray (25) starred for the winners. 8 points for Wolf in Brooklyn's loss.
========================================
[SOURCE: https://www.wired.com/sitemap/] | [TOKENS: 75]
Site Map
========================================
[SOURCE: https://en.wikipedia.org/wiki/MyNetworkTV] | [TOKENS: 3940]
Contents MyNetworkTV

MyNetworkTV (stylized as mynetworkTV), abbreviated as MNT or MNTV, is an American commercial broadcast television syndication service and former television network owned by Fox Corporation, operated by its Fox Television Stations division, and distributed via the syndication structure of Fox First Run. Under the ownership structure of Fox Corporation, the service is incorporated as a subsidiary company, Master Distribution Service, Inc. The service's weekly ten hours of programming currently originates from the library of NBCUniversal Syndication Studios, though NBCUniversal does not hold any stake in the service. MyNetworkTV began its operations on September 4, 2006, with an initial affiliate lineup covering about 96% of the country, most of which consisted of stations that were former affiliates of The WB and UPN that did not join the successor of those two networks, The CW. On September 28, 2009, following disappointment with the network's results, MyNetworkTV dropped its status as a television network and transitioned into a programming service, similar to The CW Plus, relying mainly on repeats of recent broadcast and cable series. Fox Corporation retained MyNetworkTV after the acquisition of 21st Century Fox by The Walt Disney Company was completed on March 20, 2019.

History

MyNetworkTV arose from the January 2006 announcement of the launch of The CW, a television network formed by CBS Corporation and Time Warner which essentially combined programming from The WB and UPN onto the scheduling model of the former of the two predecessors. The CW would go on to become the fifth major U.S. television network, after ABC, CBS, NBC and Fox. As a result of several deals earlier in the decade, Fox Television Stations owned several UPN affiliates, including the network's three largest stations: WWOR-TV in Secaucus, New Jersey (part of the New York City market), KCOP-TV in Los Angeles, and WPWR-TV in Gary, Indiana (part of the Chicago market). Fox had acquired WWOR and KCOP after purchasing most of the television holdings of UPN's founding partner Chris-Craft Industries, while the company purchased WPWR in 2003 from Newsweb Corporation. Despite concerns about UPN's future that arose after Fox purchased the Chris-Craft stations, UPN signed three-year affiliation agreement renewals with its Fox-owned affiliates in 2003. Those agreements' pending expiration in 2006 (along with those involving other broadcasting companies), as well as persistent financial losses for both UPN and The WB, gave CBS Corporation and Time Warner (the respective parent companies of UPN and The WB) the rare opportunity to merge their struggling networks into The CW. The CW's initial affiliation agreements did not include any of the UPN stations (nor a lone independent station) owned by Fox Television Stations. In fact, as part of a 10-year affiliation deal with The WB's part-owner, Tribune Broadcasting, the coveted New York City, Los Angeles, and Chicago affiliations all went to Tribune-owned stations (WPIX, KTLA, and WGN-TV, respectively). In response to the announcement, Fox promptly removed all network references from logos and promotional materials on its UPN affiliates and ceased on-air promotion of UPN's programs altogether.
However, in all three cases (especially Los Angeles and Chicago), the UPN affiliate was the higher-rated station; CW executives were on record as preferring the "strongest" WB and UPN affiliates. Media reports speculated that the Fox-owned UPN affiliates would all revert to being independent stations, or else form another network by uniting with other UPN- and WB-affiliated stations that were left out of The CW's affiliation deals. Fox chose the latter route and announced the launch of MyNetworkTV on February 22, 2006, less than a month after CBS and Time Warner announced the formation of The CW on January 24. The Guardian reported that Fox would utilize MySpace, the social networking website its parent company, News Corporation, had acquired in 2005, to help promote MyNetworkTV. Fox would also utilize MySpace's content-sharing model when it launched MyNetworkTV's website. Of MyNetworkTV's original telenovelas, Desire scored a 1.1 household rating/2 share, while Fashion House went up to 1.3/2. Fox had sold about half of its projection of $50 million in advance commercial sales. On March 7, 2007, MyNetworkTV began to be included in Nielsen's daily "Television Index" reports, alongside the other major broadcast networks, although it was still not part of the "fast nationals" that incorporate the other networks. Last-minute changes to MyNetworkTV's fall 2007–08 schedule included re-titling the reality series Divorce Wars to Decision House, and the addition of Celebrity Exposé and Control Room Presents to the network's Monday lineup, as well as a one-hour IFL Battleground followed by NFL Total Access on Saturdays. In response to the telenovela lineup's poor ratings performance, highlighted by an average household rating of 0.7%, reports surfaced that Fox executives were planning a major revamp of MyNetworkTV's programming, decreasing its reliance on telenovelas, adding new unscripted programs to the schedule such as reality shows, game shows (such as My GamesFever), movies and sports, and possibly revisiting a deal with the Ultimate Fighting Championship. However, MyNetworkTV instead signed a deal with another mixed martial arts organization, the International Fight League, in conjunction with Fox Sports Net. On February 1, 2007, Greg Meidel, who had been named to the newly created position of network president just ten days earlier, confirmed the rumors and unveiled a dramatically revamped lineup. The intent of the shakeup was to increase viewer awareness of the network (and boost viewership, in turn), as well as to satisfy local affiliates who were disappointed by the poor ratings performance of the network under its initial format. After March 7 (when Wicked Wicked Games and Watch Over Me finished their runs), telenovelas were reduced to occupying only two nights of the programming schedule, airing in two-hour movie-style blocks rather than each serial airing in a one-hour, five-night-a-week format. The remainder of the schedule included theatrical movies and the new IFL Battleground (originally titled Total Impact). In addition, the Saturday night telenovela recaps ended immediately, with movies running on that night until March. The 1986 film Something Wild aired on February 3, becoming the network's first non-telenovela presentation. Specials (ranging from the World Music Awards to the Hawaiian Tropic International Beauty Pageant) and reality programming were also part of the network's reformatting, with the first two specials airing on March 7.
MyNetworkTV also reduced its telenovela programming to a single night each week, with American Heiress and Saints & Sinners airing for one hour each on Wednesdays until their unexpected termination; the Wednesday lineup reportedly meshed poorly, for promotional purposes, with IFL Battleground's Monday and Tuesday airings. The new Thursday night movie block featured mostly action/adventure films, with Friday night featuring a mix of contemporary classic films, beginning on June 5. A side effect of the new programming schedule was the loss of the network's claim that it was the only U.S. broadcast network at the time to have its entire programming schedule available in high definition, because the IFL, some of the network's movies, and additional programs were produced exclusively in 480i standard definition. In the fall of 2007, MyNetworkTV dropped telenovelas altogether and began to air reality series and sports programs. On September 1, 2007, the network aired its first live program, the men's final of the AVP Croc Tour's Cincinnati Open. The network debuted its first sitcom, the Flavor Flav vehicle Under One Roof, on April 16, 2008; because the series used Canadian writers, it was unaffected by the 2007–08 Writers Guild strike. The network's shift from telenovelas to reality shows and movies produced only a small bump in the ratings: it averaged only a 0.7 household rating during September 2007. MyNetworkTV continues to be the second lowest-rated English-language broadcast network in the United States, ahead of only Ion Television. On February 26, 2008, the network announced it had picked up the rights to air WWE SmackDown, which left The CW at the end of September 2008. The first SmackDown episode on MyNetworkTV aired on October 3, 2008. It pulled in the largest audience in MyNetworkTV history with 3.2 million viewers and, for the first time, put the network in fifth place for the night – ahead of The CW – and was the top-rated program that night in the male 18–34 and 18–49 demographics. The network went back to sixth place shortly afterward. Of the six broadcast networks, Nielsen Media Research said that only MyNetworkTV had increased viewership, with 1.76 million viewers per night, up 750,000 from the previous season. On January 5, 2009, MyNetworkTV began airing episodes of the 2002 revival of The Twilight Zone (which originally aired on UPN, one of the networks MyNetworkTV had replaced). The series helped the network's ratings rise and became, after WWE SmackDown, the second highest-rated program on the network. The highest-rated program ever aired on MyNetworkTV is a December 10, 2008, broadcast of the 1990 comedy film Home Alone, which brought in 3.70 million viewers (though not an audience record) but earned a 1.4 rating among adults 18–49. On February 9, 2009, Fox Entertainment Group announced that MyNetworkTV would convert from a television network to a programming service, similar to The CW Plus, with a focus on repeats of acquired programs originally aired on broadcast and cable networks and in first-run syndication. Litton Entertainment had reportedly expressed interest in leasing MyNetworkTV's Saturday evening time slots, which MyNetworkTV chose instead to turn back over to its affiliates. MyNetworkTV began airing more syndicated programming in the fall, which included game shows and dramas, five nights a week.
This required the network's affiliates to negotiate a new affiliation agreement with the new corporation within Fox operating MyNetworkTV, Master Distribution Service, Inc., though it also gave a full and unencumbered "out" to stations that chose to end their association with MyNetworkTV in the process, which Ion Television did with its three affiliates. On April 12, 2010, WWE announced that WWE SmackDown would move to the Syfy cable channel that October; the move left MyNetworkTV with no first-run programming other than what it shared with its syndicators. Despite the lack of first-run programming, MyNetworkTV renewed its affiliation contracts for three more years on February 14, 2011. The programming service has seen significant viewership growth since its 2006 startup as a television network. Although ratings on MyNetworkTV do not match those of the other broadcast networks, Nexstar (the future owner of rival network The CW) CEO Perry Sook noted his approval of its business model at the time, saying that Nexstar's MyNetworkTV stations get "more (local ad) inventory per hour" than they would if they were associated with a traditional network such as Fox or ABC. Nexstar has since become the owner of, and the largest affiliate base for, The CW through several acquisitions, and has converted three MyNetworkTV affiliations into CW affiliations, including WPHL-TV, which had been the largest MyNetworkTV affiliate by market size not owned and operated by the Fox Television Stations subsidiary of Fox Corporation, which owns the programming service. In announcing its fall schedule for the 2012–13 season, MyNetworkTV executives revealed that the programming service had increased its ratings over the previous year, rating as the sixth most-watched network during the 2011–12 season with around 2.5 million viewers. Though MyNetworkTV earned some recognition at launch as a sixth English-language broadcast television network, behind The CW, this tenuous status would eventually be lost as digital multicast networks such as MeTV gained wider distribution and critical acclaim for their classic television schedules. Ion Television, which had struggled in the mid-2000s with management and programming issues after its own attempt to become the sixth network as PAX TV, would also stabilize, eventually reaching ratings parity with MyNetworkTV and passing it by the mid-2010s. Programming MyNetworkTV began operations on Tuesday, September 5, 2006, with the premieres of its two initial series. Some affiliates unofficially began rebranding their stations well beforehand, in July and August, to allow viewers to grow accustomed to the new brands, though most fulfilled their existing WB and UPN network commitments and did not start branding in earnest until September 1 (the Friday before), when the majority of those affiliation agreements expired. The network provided a block of preview programming that aired the day before, on September 4, though it did not launch officially that day due to the low audience figures traditionally associated with the Labor Day holiday. Initially, programming aired Monday through Saturday from 8:00 to 10:00 p.m. (Eastern and Pacific Time). As of April 2013, MyNetworkTV broadcasts ten hours of primetime programming each week, airing Monday through Friday evenings from 8:00 to 10:00 p.m. Eastern and Pacific. MyNetworkTV does not air programming on weekends, making it the only broadcast service in the United States not to do so.
Heavy local sports preemptions were previously a problem for MyNetworkTV at its launch, as they were for all of the U.S. broadcast networks that have debuted since the January 1995 launches of The WB and UPN. These became less of an issue with the end of the network's telenovela era, during which a pre-empted telenovela episode had, by default, to be rescheduled as soon as possible on the same day, in contrast to the flexibility that affiliates of UPN, The WB, or The CW had to push a show off to a weekend slot. With the service's switch to an all-rerun schedule in 2009, stations can effectively pre-empt repeat programming at will to fit in sporting events (mainly those provided by syndication services such as ESPN Regional Television and the ACC Network, as some local events that had aired on its affiliates have moved to regional sports networks in the time since MyNetworkTV launched) without much consequence. During the telenovela era, affiliates often scheduled contractual "make goods" of the network's daily schedule between 3:00 and 6:00 a.m. local time. Not only are these light viewing hours, but they air after Nielsen processes its preliminary morning network ratings. The network's original format focused on the 18-to-49-year-old, English-speaking population, with programming consisting exclusively of telenovelas (a version of the soap opera format rarely attempted on American television outside of Spanish-language broadcast networks, much less in primetime), starting with Desire and Fashion House. Originally, each series aired Monday through Friday in continuous cycles of 13-week seasons, with a one-hour recap of the week's episodes airing on Saturdays; when one series ended, another unrelated series would begin the following week. The fifth and sixth series, American Heiress and Saints and Sinners, appeared one hour per week on Wednesdays before abruptly vanishing from the schedule. The MyNetworkTV serial lineup was broadcast in Australia on the W. Channel under the block name FOXTELENOVELA. In Canada, the first Desire/Fashion House cycle aired weekday afternoons on Toronto independent station CKXT-TV, which decided not to air subsequent cycles for unknown reasons. The announcement of the network also stated that additional unscripted reality-based and current-affairs programming was in development; however, MyNetworkTV abandoned the development of these programs in mid-2006, choosing to focus solely on telenovelas. Later announcements by Fox regarding additional programming to air on MyNetworkTV owned-and-operated stations – such as Desperate Housewives repeats in traditional weekend syndication, a trial run of the sitcom Tyler Perry's House of Payne (which later moved to TBS), and the daytime viewer-participation game show My GamesFever – never applied to the network as a whole. To satisfy E/I requirements, some affiliates carry the Litton Go Time block while others carry Xploration Station. Affiliates and branding At launch, MyNetworkTV's affiliation base consisted of former WB or UPN affiliates. Along with Fox's existing UPN station group, three Tribune WB stations and three CBS-owned UPN stations signed up with the network. Sinclair Broadcast Group signed up 17 of its stations on March 6, 2006; this was followed by deals with Raycom Media and Capitol Broadcasting Company one day later.
Four LIN Media stations agreed to affiliate on April 26, 2006; additional affiliation deals were later announced that placed MyNetworkTV on digital subchannels or on stations that had already agreed to carry The CW, including KNVA in Austin, Texas, and KWKB in Iowa City, Iowa. Carriage in Miami, New Orleans, Denver, and Boston was secured by July, and most remaining vacancies in the top 100 television markets were filled by August. The Boston affiliate, WZMY-TV in Derry, New Hampshire, had already filed a trademark application for "MyTV" on July 6, 2005, leading to speculation that it would file a lawsuit against Fox over the name. Most affiliates, including all stations owned by Fox Television Stations, initially utilized a naming convention incorporating the "My" moniker and network logo, but these brands have been downplayed following MyNetworkTV's business model shift. In particular, Cincinnati's WSTR-TV revived its former "Star 64" brand, WPMY in Pittsburgh rebranded as WPNT "22 The Point", and KAUT-TV rebranded as "OK43" and later as "Freedom 43"; both WPNT and KAUT switched to The CW in 2023. Some MyNetworkTV stations have rebranded as extensions of a parent station, particularly Fox's owned-and-operated stations (with the exceptions of WWOR-TV and KTXH), such as WDCA becoming "Fox 5 Plus". By 2014, when the service acquired off-network reruns of The Walking Dead, MyNetworkTV boasted a carriage rate of 97 percent of U.S. television households. The network's owned-and-operated stations are held by Fox Television Stations, LLC, a subsidiary of Fox Corporation.
========================================
[SOURCE: https://www.theverge.com/transportation/881873/tesla-cybertruck-awd-price-cut] | [TOKENS: 1751]
Tesla's cheaper $60,000 Cybertruck is still a Cybertruck
The price is lower, but the stigma remains the same.
by Dominic Preston, News Editor, Feb 20, 2026, 12:47 PM UTC. Image: Tesla. Part of: Tesla Cybertruck: all the news about Elon Musk's futuristic pickup truck. Dominic Preston is a news editor with over a decade's experience in journalism. He previously worked at Android Police and Tech Advisor.
Tesla has announced a new all-wheel drive Cybertruck that starts at $59,990, the cheapest the controversial truck has been sold for yet — though still well above the $40,000 price tag Elon Musk had initially promised. It's been joined by a $15,000 price cut for the high-end Cyberbeast variant, as Tesla doubles down on its efforts to turn slow Cybertruck sales around.
The new dual motor AWD variant is available now from Tesla's site. It's cheaper than the rear-wheel drive version that was launched last year and discontinued a few short months later, but includes features not seen on that model like a powered tonneau cover, bed outlets, and adaptive damping. The only downside is a slightly shorter range thanks to the second motor: 325 miles instead of 350.
The price may not last long though — in a reply posted on X, Musk said that the AWD Cybertruck will cost $59,990 "for the next 10 days," though gave no further indication on what will happen after that.
At the same time, Tesla has reduced the price of the tri-motor Cyberbeast to $99,990. If that sounds familiar it's because it's back at this price for the third time. The Cyberbeast launched at $99,990, then went up by $20,000, back down by the same, up again by $15,000, and is now right back at square one. I guess that means if you're tempted you should go for it, because there's no telling when the price will go back up.
Then again, you might expect Tesla to keep Cybertrucks as cheap as it can.
Sales plummeted 48 percent in 2025 according to one estimate, to just over 20,000 trucks across the whole year — just a little below Musk's early predictions that Tesla would sell 250,000 Cybertrucks a year.
Update, February 20th: Added Elon Musk post indicating the new low price may be temporary.
========================================
[SOURCE: https://en.wikipedia.org/wiki/FitFinder] | [TOKENS: 520]
Contents FitFinder FitFinder was a social networking website primarily based in the United Kingdom. FitFinder was described by its creator, Rich Martell, as localised anonymous microblogging. FitFinder was based on the concept of anonymously posting the location and a description of an attractive person whom one has spotted; this post was then immediately placed on the FitFeed, where it could be viewed by anyone. Background The concept of the FitFinder website was conceived by Rich Martell, a computer science undergraduate studying at University College London (UCL). The website, launched in April 2010, began as a joke between Martell and his rugby friends, who would text each other when they spotted an attractive girl. The website spread quickly and went viral. In the first few hours the site had over 2,000 users and had to be taken down. Once the site was back online, its popularity grew to nearly 20,000 visitors in the first weekend. The initial success was met with huge demand to expand FitFinder to more universities across the UK. By the time the site was taken down, it was reported to have had over 250,000 users across several countries, with more than 5 million page views. Coverage The FitFinder network covered 52 UK universities, including Oxbridge, Durham University, UCL, Manchester University, Leeds, Warwick, Bath, LSE, KCL, Imperial College London, and most Red Brick universities. Prior to its closure, Martell had said that FitFinder was going to be expanded beyond universities in the near future, possibly covering sporting events and music festivals. Controversy Because of the nature of the user-generated content on FitFinder, many commentators accused it of being offensive and inappropriate. Less than one week after the site went live, the London School of Economics emailed all of its students warning them about the site. A number of complaints about the site prompted JANET, the UK network provider that serves universities, to block the site. The ban itself sparked more complaints, which led to its reversal. In addition, UCL fined Martell for refusing to take the site down. On 28 May 2010, the FitFinder website was taken down because of "increasing pressure from universities" and the maximum fine UCL imposed on Martell for bringing the university into disrepute. Floxx In January 2011, Martell launched a new social network, this time called Floxx. Floxx is designed to be a location platform that encourages location-based sharing.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Microlensing] | [TOKENS: 5807]
Contents Gravitational microlensing Gravitational microlensing is an astronomical phenomenon caused by the gravitational lens effect. It can be used to detect objects that range from the mass of a planet to the mass of a star, regardless of the light they emit. Typically, astronomers can only detect bright objects that emit much light (stars) or large objects that block background light (clouds of gas and dust). These objects make up only a minor portion of the mass of a galaxy. Microlensing allows the study of objects that emit little or no light. When a distant star or quasar gets sufficiently aligned with a massive compact foreground object, the bending of light due to its gravitational field, as discussed by Albert Einstein in 1915, leads to two distorted images (generally unresolved), resulting in an observable magnification. The time-scale of the transient brightening depends on the mass of the foreground object as well as on the relative proper motion between the background 'source' and the foreground 'lens' object. Ideally aligned microlensing produces a clear buffer between the radiation from the lens and source objects. It magnifies the distant source, revealing it or enhancing its size and/or brightness. It enables the study of the population of faint or dark objects such as brown dwarfs, red dwarfs, planets, white dwarfs, neutron stars, black holes, and massive compact halo objects. Such lensing works at all wavelengths, magnifying and producing a wide range of possible warping for distant source objects that emit any kind of electromagnetic radiation. Microlensing by an isolated object was first detected in 1989. Since then, microlensing has been used to constrain the nature of the dark matter, detect exoplanets, study limb darkening in distant stars, constrain the binary star population, and constrain the structure of the Milky Way's disk. Microlensing has also been proposed as a means to find dark objects like brown dwarfs and black holes, study starspots, measure stellar rotation, and probe quasars including their accretion disks. Microlensing was used in 2018 to detect Icarus, then the most distant star ever observed. How it works Microlensing is based on the gravitational lens effect. A massive object (the lens) will bend the light of a bright background object (the source). This can generate multiple distorted, magnified, and brightened images of the background source. Microlensing is caused by the same physical effect as strong gravitational lensing and weak gravitational lensing but it is studied by very different observational techniques. In strong and weak lensing, the mass of the lens is large enough (mass of a galaxy or galaxy cluster) that the displacement of light by the lens can be resolved with a high resolution telescope such as the Hubble Space Telescope. With microlensing, the lens mass is too low (mass of a planet or a star) for the displacement of light to be observed easily, but the apparent brightening of the source may still be detected. In such a situation, the lens will pass by the source in a reasonable amount of time, seconds to years instead of millions of years. As the alignment changes, the source's apparent brightness changes, and this can be monitored to detect and study the event. Thus, unlike with strong and weak gravitational lenses, microlensing is a transient astronomical event from a human timescale perspective, thus a subject of time-domain astronomy.
Unlike with strong and weak lensing, no single observation can establish that microlensing is occurring. Instead, the rise and fall of the source brightness must be monitored over time using photometry. This function of brightness versus time is known as a light curve. A typical microlensing event has a very simple, smooth, symmetric shape, and only one physical parameter can be extracted: the time scale, which is related to the lens mass, distance, and velocity. There are several effects, however, that contribute to the shape of more atypical lensing events; these are discussed under extreme microlensing events below. Most focus is currently on the more unusual microlensing events, especially those that might lead to the discovery of extrasolar planets. Another way to get more information from microlensing events involves measuring the astrometric shifts in the source position during the course of the event, and even resolving the separate images with interferometry. The first successful resolution of microlensing images was achieved with the GRAVITY instrument on the Very Large Telescope Interferometer (VLTI). When the two images of the source are not resolved (that is, are not separately detectable by the available instruments), the measured position is an average of the two positions, weighted by their brightness. This is called the position of the centroid. If the source is, say, far to the "right" of the lens, then one image will be very close to the true position of the source, and the other will be very close to the lens on its left side, and very small or dim. In this case, the centroid is practically in the same position as the source. If the sky position of the source is close to that of the lens and on the right, the main image will be a bit further to the right of the true source position, and the centroid will be to the right of the true position. But as the source gets even closer in the sky to the lens position, the two images become symmetrical and equal in brightness, and the centroid will again be very close to the true position of the source. When alignment is perfect, the centroid is exactly at the same position as the source (and the lens). In this case, there will not be two images but an Einstein ring around the lens. Observing microlensing In practice, because the alignment needed is so precise and difficult to predict, microlensing is very rare. Events, therefore, are generally found with surveys, which photometrically monitor tens of millions of potential source stars every few days for several years. Dense background fields suitable for such surveys are nearby galaxies, such as the Magellanic Clouds and the Andromeda galaxy, and the Milky Way bulge. In each case, the lens population studied comprises the objects between Earth and the source field: for the bulge, the lens population is the Milky Way disk stars, and for external galaxies, the lens population is the Milky Way halo, as well as objects in the other galaxy itself. The density, mass, and location of the objects in these lens populations determine the frequency of microlensing along that line of sight, which is characterized by a value known as the optical depth due to microlensing. (This is not to be confused with the more common meaning of optical depth, although it shares some properties.) The optical depth is, roughly speaking, the average fraction of source stars undergoing microlensing at a given time, or equivalently the probability that a given source star is undergoing lensing at a given time.
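To make this concrete, here is a minimal back-of-envelope sketch in Python. The survey size and mean Einstein timescale are assumptions, and the per-star event rate Γ = (2/π) τ / t_E is the standard point-lens estimate rather than a figure quoted in this article; it shows why tens of millions of stars must be monitored.

import math

# Assumed inputs: bulge optical depth (a value of this order is reported
# below), a survey of 100 million stars, and a ~20-day mean Einstein time.
tau = 2.4e-6             # optical depth toward the Galactic bulge
n_stars = 100e6          # monitored source stars (assumed survey size)
t_E = 20.0 / 365.25      # mean Einstein timescale, in years (assumed)

ongoing = tau * n_stars                          # events in progress at any instant
rate = (2.0 / math.pi) * n_stars * tau / t_E     # new events per year
print(f"~{ongoing:.0f} events in progress, ~{rate:.0f} new events per year")

With these assumptions, a survey of this size would have only a couple of hundred events in progress at any moment, and would catch a few thousand per year, broadly in line with the survey yields discussed below.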
The MACHO project found the optical depth toward the LMC to be 1.2×10⁻⁷, and the optical depth toward the bulge to be 2.43×10⁻⁶, or about 1 in 400,000. Complicating the search is the fact that for every star undergoing microlensing, there are thousands of stars changing in brightness for other reasons (about 2% of the stars in a typical source field are naturally variable stars) and other transient events (such as novae and supernovae), and these must be weeded out to find true microlensing events. After a microlensing event in progress has been identified, the monitoring program that detected it often alerts the community to its discovery, so that other specialized programs may follow the event more intensively, hoping to find interesting deviations from the typical light curve. This is because these deviations – particularly ones due to exoplanets – require hourly monitoring to be identified, which the survey programs are unable to provide while still searching for new events. The question of how to prioritize events in progress for detailed follow-up with limited observing resources is very important for microlensing researchers today. History In the Queries appended to his book Opticks, expanded between 1704 and 1718, Isaac Newton wondered (in Query 1) whether a light ray could be deflected by gravity. In 1801, Johann Georg von Soldner calculated the amount of deflection of a light ray from a star under Newtonian gravity. In 1915, Albert Einstein correctly predicted the amount of deflection under general relativity, which was twice the amount predicted by von Soldner. Einstein's prediction was validated by a 1919 expedition led by Arthur Eddington, which was a great early success for general relativity. In 1924, Orest Chwolson found that lensing could produce multiple images of the star. A correct prediction of the concomitant brightening of the source, the basis for microlensing, was published in 1936 by Einstein. Because of the unlikely alignment required, he concluded that "there is no great chance of observing this phenomenon". The modern theoretical framework of gravitational lensing was established with works by Yu Klimov (1963), Sidney Liebes (1964), and Sjur Refsdal (1964). Gravitational lensing was first observed in 1979, in the form of a quasar lensed by a foreground galaxy. That same year, Kyongae Chang and Sjur Refsdal showed that individual stars in the lens galaxy could act as smaller lenses within the main lens, causing the source quasar's images to fluctuate on a timescale of months; this configuration is also known as the Chang–Refsdal lens. Peter J. Young then appreciated that the analysis needed to be extended to allow for the simultaneous effect of many stars. Bohdan Paczyński first used the term "microlensing" to describe this phenomenon. This type of microlensing is difficult to identify because of the intrinsic variability of quasars, but in 1989 Mike Irwin et al. published the detection of microlensing of one of the four images in the "Einstein Cross" quasar in Huchra's Lens. In 1986, Paczyński proposed using microlensing to look for dark matter in the form of massive compact halo objects (MACHOs) in the Galactic halo, by observing background stars in a nearby galaxy. Two groups of particle physicists working on dark matter heard his talks and joined with astronomers to form the Anglo-Australian MACHO collaboration and the French EROS collaboration. In 1986, Robert J.
Nemiroff predicted the likelihood of microlensing and calculated basic microlensing-induced light curves for several possible lens–source configurations in his 1987 thesis. In 1991, Mao and Paczyński suggested that microlensing might be used to find binary companions to stars, and in 1992, Gould and Loeb demonstrated that microlensing can be used to detect exoplanets. In 1992, Paczyński founded the Optical Gravitational Lensing Experiment (OGLE), which began searching for events in the direction of the Galactic bulge. The first two microlensing events in the direction of the Large Magellanic Cloud that might be caused by dark matter were reported in back-to-back Nature papers by MACHO and EROS in 1993, and in the following years, events continued to be detected. The first two events detected by the EROS group later turned out to have an origin other than microlensing. During this time, Sun Hong Rhie worked on the theory of exoplanet microlensing for events from the survey. The MACHO collaboration ended in 1999. Their data refuted the hypothesis that 100% of the dark halo comprises MACHOs, but they found a significant unexplained excess of roughly 20% of the halo mass, which might be due to MACHOs or to lenses within the Large Magellanic Cloud itself. EROS subsequently published even stronger upper limits on MACHOs, and it is currently uncertain whether there is any halo microlensing excess that could be due to dark matter at all. The SuperMACHO project currently underway seeks to locate the lenses responsible for MACHO's results. Despite not solving the dark matter problem, microlensing has been shown to be a useful tool for many applications. Hundreds of microlensing events are detected per year toward the Galactic bulge, where the microlensing optical depth (due to stars in the Galactic disk) is about 20 times greater than through the Galactic halo. In 2007, the OGLE project identified 611 event candidates, and the MOA project (a Japan–New Zealand collaboration) identified 488 (although not all candidates turn out to be microlensing events, and there is significant overlap between the two projects). In addition to these surveys, follow-up projects are underway to study potentially interesting events in progress in detail, primarily with the aim of detecting extrasolar planets. These include MiNDSTEp, RoboNet, MicroFUN and PLANET. In September 2020, astronomers using microlensing techniques reported the detection, for the first time, of an Earth-mass rogue planet, unbound to any star and free-floating in the Milky Way galaxy. Microlensing not only magnifies the source but also moves its apparent position. The duration of this shift is longer than that of the magnification, and it can be used to find the mass of the lens. In 2022, it was reported that this technique was used to make the first unambiguous detection of an isolated stellar-mass black hole, using observations by the Hubble Space Telescope stretching over six years, starting in August 2011 shortly after the microlensing event was detected. The black hole has a mass of about 7 solar masses and is about 1.6 kiloparsecs (5.2 kly) away, in Sagittarius, while the source star is about 6 kiloparsecs (20 kly) away. There are millions of isolated black holes in our galaxy, and because they are isolated, very little radiation is emitted from their surroundings, so they can only be detected by microlensing.
The authors expect that many more will be found with future instruments, specifically the Nancy Grace Roman Space Telescope and the Vera C. Rubin Observatory. Mathematics The mathematics of microlensing, along with modern notation, is described by Gould, and we use his notation in this section, though other authors have used other notation. The Einstein radius, also called the Einstein angle, is the angular radius of the Einstein ring in the event of perfect alignment. It depends on the lens mass M, the distance of the lens d_L, and the distance of the source d_S: \theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{d_S - d_L}{d_L d_S}}. For M equal to 60 Jupiter masses, d_L = 4000 parsecs, and d_S = 8000 parsecs (typical for a bulge microlensing event), the Einstein radius is 0.00024 arcseconds (the angle subtended by 1 au at 4000 parsecs). By comparison, ideal Earth-based observations have an angular resolution of around 0.4 arcseconds, 1660 times greater. Since \theta_E is so small, it is not generally observed for a typical microlensing event, but it can be observed in some extreme events as described below. Although there is no clear beginning or end of a microlensing event, by convention the event is said to last while the angular separation between the source and lens is less than \theta_E. Thus the event duration is determined by the time it takes the apparent motion of the lens in the sky to cover an angular distance \theta_E. The Einstein radius is also of the same order of magnitude as the angular separation between the two lensed images, and as the astrometric shift of the image positions throughout the course of the microlensing event. During a microlensing event, the brightness of the source is amplified by an amplification factor A. This factor depends only on the closeness of the alignment between observer, lens, and source. The unitless number u is defined as the angular separation of the lens and the source, divided by \theta_E. The amplification factor is given in terms of this value: A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}. This function has several important properties. A(u) is always greater than 1, so microlensing can only increase the brightness of the source star, not decrease it. A(u) always decreases as u increases, so the closer the alignment, the brighter the source becomes. As u approaches infinity, A(u) approaches 1, so that at wide separations microlensing has no effect. Finally, as u approaches 0, A(u) for a point source approaches infinity as the images approach an Einstein ring. For perfect alignment (u = 0), A(u) is theoretically infinite. In practice, real-world objects are not point sources, and finite source size effects set a limit to how large an amplification can occur for very close alignment, but some microlensing events can cause a brightening by a factor of hundreds. Unlike gravitational macrolensing, where the lens is a galaxy or cluster of galaxies, in microlensing u changes significantly in a short period of time. The relevant time scale is called the Einstein time t_E, and it is given by the time it takes the lens to traverse an angular distance \theta_E relative to the source in the sky. For typical microlensing events, t_E is on the order of a few days to a few months. The function u(t) is determined by the Pythagorean theorem: u(t) = \sqrt{u_{\min}^2 + \left(\frac{t - t_0}{t_E}\right)^2}, where t_0 is the time of closest approach. The minimum value of u, called u_min, determines the peak brightness of the event.
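As a concrete illustration of these formulas, the short Python sketch below evaluates the Einstein angle for the 60-Jupiter-mass worked example above and a point-source point-lens light curve A(u(t)); the Einstein time and impact parameter in the light-curve part are arbitrary illustrative choices, not values from the article.

import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
PC = 3.0857e16       # metres per parsec
M_JUP = 1.898e27     # Jupiter mass, kg

def einstein_angle(m_lens, d_lens, d_source):
    """Einstein angle in radians: theta_E = sqrt(4GM/c^2 * (dS - dL)/(dL dS))."""
    return np.sqrt(4 * G * m_lens / C**2 * (d_source - d_lens) / (d_lens * d_source))

def magnification(u):
    """Point-source point-lens amplification A(u) = (u^2 + 2)/(u sqrt(u^2 + 4))."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# The article's worked example: 60 Jupiter masses, d_L = 4000 pc, d_S = 8000 pc.
theta_E = einstein_angle(60 * M_JUP, 4000 * PC, 8000 * PC)
print(np.degrees(theta_E) * 3600)    # ~2.4e-4 arcseconds, as stated in the text

# An illustrative light curve: u(t) from the Pythagorean relation above.
t = np.linspace(-30, 30, 601)        # days, relative to the peak time t0 = 0
t_E, u_min = 10.0, 0.1               # assumed Einstein time (days) and impact parameter
u = np.sqrt(u_min**2 + (t / t_E)**2)
print(magnification(u).max())        # peak amplification ~10 for u_min = 0.1

The peak amplification depends only on u_min: a close alignment such as u_min = 0.1 brightens the source roughly tenfold, while u_min = 1 gives only about a 34% rise.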
In a typical microlensing event, the light curve is well fit by assuming that the source is a point, the lens is a single point mass, and the lens is moving in a straight line: the point source–point lens approximation. In these events, the only physically significant parameter that can be measured is the Einstein timescale t_E. Since this observable is a degenerate function of the lens mass, distance, and velocity, we cannot determine these physical parameters from a single event. However, in some extreme events, \theta_E may be measurable, while other extreme events can probe an additional parameter: the size of the Einstein ring in the plane of the observer, known as the projected Einstein radius \tilde{r}_E. This parameter describes how the event will appear to differ between two observers at different locations, such as a satellite observer. The projected Einstein radius is related to the physical parameters of the lens and source by \tilde{r}_E = \theta_E\,\frac{d_L d_S}{d_S - d_L}. It is mathematically convenient to use the inverses of some of these quantities. These are the Einstein proper motion \vec{\mu}_E, with magnitude \mu_E = \theta_E/t_E, and the Einstein parallax \vec{\pi}_E, with magnitude \pi_E = 1\,\mathrm{au}/\tilde{r}_E. These vector quantities point in the direction of the relative motion of the lens with respect to the source. Some extreme microlensing events can only constrain one component of these vector quantities. Should these additional parameters be fully measured, the physical parameters of the lens can be solved, yielding the lens mass, parallax, and proper motion as M = \frac{\theta_E}{\kappa\pi_E}, \quad \pi_{\mathrm{rel}} = \theta_E\pi_E, \quad \mu_{\mathrm{rel}} = \frac{\theta_E}{t_E}, where \kappa \equiv 4G/(c^2\,\mathrm{au}) \approx 8.14\,\mathrm{mas}/M_\odot (a numerical sketch of this solution follows this passage). Extreme microlensing events In a typical microlensing event, the light curve is well fit by assuming that the source is a point, the lens is a single point mass, and the lens is moving in a straight line: the point source–point lens approximation. In these events, the only physically significant parameter that can be measured is the Einstein timescale t_E. However, in some cases, events can be analyzed to yield the additional parameters of the Einstein angle and parallax, \theta_E and \pi_E. These include very high magnification events, binary lenses, parallax and xallarap events, and events where the lens is visible. Although the Einstein angle is too small to be directly visible from a ground-based telescope, several techniques have been proposed to observe it. If the lens passes directly in front of the source star, then the finite size of the source star becomes an important parameter. The source star must be treated as a disk on the sky, not a point, breaking the point-source approximation and causing a deviation from the traditional microlensing curve that lasts as long as the time for the lens to cross the source, known as a finite source light curve. The length of this deviation can be used to determine the time needed for the lens to cross the disk of the source star, t_S. If the angular size of the source \theta_S is known, the Einstein angle can be determined as \theta_E = \theta_S\,\frac{t_E}{t_S}. These measurements are rare, since they require an extreme alignment between source and lens. They are more likely when \theta_S/\theta_E is (relatively) large, i.e., for nearby giant sources with slow-moving low-mass lenses close to the source. In finite source events, different parts of the source star are magnified at different rates at different times during the event. These events can thus be used to study the limb darkening of the source star.
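Here is the promised sketch of the parameter solution above, for a hypothetical event in which \theta_E, \pi_E and t_E have all been measured; all input values are invented for illustration.

# Solve lens mass, relative parallax and proper motion from measured
# microlensing observables, in the notation used above. Inputs are assumed
# example values, not a fitted real event.
KAPPA = 8.14  # kappa = 4G / (c^2 au), in milliarcseconds per solar mass

def lens_parameters(theta_E_mas, pi_E, t_E_days):
    mass_msun = theta_E_mas / (KAPPA * pi_E)           # M = theta_E / (kappa pi_E)
    pi_rel_mas = theta_E_mas * pi_E                    # relative lens-source parallax
    mu_rel_mas_yr = theta_E_mas / (t_E_days / 365.25)  # relative proper motion
    return mass_msun, pi_rel_mas, mu_rel_mas_yr

# A long event with a small parallax points to a massive, possibly dark lens,
# qualitatively like the isolated black hole detection described earlier:
mass, pi_rel, mu_rel = lens_parameters(theta_E_mas=5.0, pi_E=0.1, t_E_days=270.0)
print(f"M ~ {mass:.1f} Msun, pi_rel ~ {pi_rel:.2f} mas, mu_rel ~ {mu_rel:.1f} mas/yr")

With these invented numbers the lens comes out at roughly 6 solar masses, illustrating how a long timescale combined with a small measured parallax singles out heavy, nearby lenses.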
If the lens is a binary star with a separation of roughly the Einstein radius, the magnification pattern is more complex than in the single-star lens case. In this case, there are typically three images when the lens is distant from the source, but there is a range of alignments where two additional images are created. These alignments are known as caustics. At these alignments, the magnification of the source is formally infinite under the point-source approximation. Caustic crossings in binary lenses can happen with a wider range of lens geometries than in a single lens. Like a single-lens source caustic, it takes a finite time for the source to cross the caustic. If this caustic-crossing time t_S can be measured, and if the angular radius of the source is known, then again the Einstein angle can be determined. As in the single-lens case, when the source magnification is formally infinite, caustic-crossing binary lenses will magnify different portions of the source star at different times. They can thus probe the structure of the source and its limb darkening. In principle, the Einstein parallax can be measured by having two observers simultaneously observe the event from different locations, e.g., from the Earth and from a distant spacecraft. The difference in amplification observed by the two observers yields the component of \vec{\pi}_E perpendicular to the motion of the lens, while the difference in the time of peak amplification yields the component parallel to the motion of the lens. This direct measurement has been reported using the Spitzer Space Telescope. In extreme cases, the differences may even be measurable from the small differences seen by telescopes at different locations on Earth, i.e., terrestrial parallax. The Einstein parallax can also be measured through orbital parallax: the motion of the observer, caused by the rotation of the Earth around the Sun and the Sun's motion through the Galaxy, means that a microlensing event is observed from different angles at each observation epoch. This was first reported in 1995 and has been reported in a handful of events since. Parallax in point-lens events can best be measured for long-timescale events with a large \pi_E, i.e., from slow-moving, low-mass lenses which are close to the observer. If the source star is a binary star, then it too will have additional relative motion, which can also cause detectable changes in the light curve. This effect is known as xallarap (parallax spelled backwards). Detection of extrasolar planets If the lensing object is a star with a planet orbiting it, this is an extreme example of a binary lens event. If the source crosses a caustic, the deviations from a standard event can be large even for low-mass planets. These deviations allow us to infer the existence and determine the mass and separation of the planet around the lens. Deviations typically last a few hours or a few days. Because the signal is strongest when the event itself is strongest, high-magnification events are the most promising candidates for detailed study. Typically, a survey team notifies the community when they discover a high-magnification event in progress. Follow-up groups then intensively monitor the ongoing event, hoping to get good coverage of the deviation if it occurs. When the event is over, the light curve is compared to theoretical models to find the physical parameters of the system.
The parameters that can be determined directly from this comparison are the mass ratio of the planet to the star, and the ratio of the star–planet angular separation to the Einstein angle. From these ratios, along with assumptions about the lens star, the mass of the planet and its orbital distance can be estimated (see the sketch at the end of this section). The first success of this technique came in 2003, when both OGLE and MOA observed the microlensing event OGLE 2003–BLG–235 (or MOA 2003–BLG–53). Combining their data, they found the most likely planet mass to be 1.5 times the mass of Jupiter. As of April 2020, 89 exoplanets have been detected by this method. Notable examples include OGLE-2005-BLG-071Lb, OGLE-2005-BLG-390Lb, OGLE-2005-BLG-169Lb, two exoplanets around OGLE-2006-BLG-109L, and MOA-2007-BLG-192Lb. Notably, at the time of its announcement in January 2006, the planet OGLE-2005-BLG-390Lb probably had the lowest mass of any known exoplanet orbiting a regular star, with a median of 5.5 times the mass of the Earth and roughly a factor-of-two uncertainty. This record was contested in 2007 by Gliese 581 c, with a minimum mass of 5 Earth masses, and since 2009 Gliese 581 e has been the lightest known "regular" exoplanet, with a minimum mass of 1.9 Earth masses. In October 2017, OGLE-2016-BLG-1190Lb, an extremely massive exoplanet (or possibly a brown dwarf) about 13.4 times the mass of Jupiter, was reported. Comparing this method of detecting extrasolar planets with other techniques such as the transit method, one advantage is that the intensity of the planetary deviation does not depend on the planet mass as strongly as effects in other techniques do. This makes microlensing well suited to finding low-mass planets. It also allows detection of planets further away from the host star than most of the other methods. One disadvantage is that follow-up of the lens system is very difficult after the event has ended, because it takes a long time for the lens and the source to be sufficiently separated to resolve them individually. A terrestrial atmospheric lens, proposed by Yu Wang in 1998, that would use Earth's atmosphere as a large lens could also directly image nearby potentially habitable exoplanets. Microlensing experiments There are two basic types of microlensing experiments. "Search" groups use large-field images to find new microlensing events. "Follow-up" groups often coordinate telescopes around the world to provide intensive coverage of select events. The initial experiments all had somewhat risqué names until the formation of the PLANET group. There are current proposals to build new specialized microlensing satellites, or to use other satellites to study microlensing.
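As promised above, here is a sketch of converting the fitted light-curve ratios into physical planet parameters. Only the ratios q (planet-to-star mass) and s (separation in Einstein radii) come directly out of the fit; the lens-star mass, Einstein angle and distance below are assumptions, with values chosen loosely in the range reported for OGLE-2005-BLG-390Lb rather than actual fitted results.

# Convert directly measured light-curve ratios into physical planet
# parameters under assumed lens-star properties (all inputs illustrative).
M_SUN_IN_M_EARTH = 332946.0

def planet_from_ratios(q, s, m_star_msun, theta_E_mas, d_lens_pc):
    m_planet_earths = q * m_star_msun * M_SUN_IN_M_EARTH
    # Projected star-planet separation in au: s * theta_E (in arcsec) * d_L (in pc).
    a_proj_au = s * (theta_E_mas / 1000.0) * d_lens_pc
    return m_planet_earths, a_proj_au

m_p, a = planet_from_ratios(q=8e-5, s=1.3, m_star_msun=0.2,
                            theta_E_mas=0.3, d_lens_pc=6500.0)
print(f"planet ~ {m_p:.1f} Earth masses at ~ {a:.1f} au (projected)")

With these assumed inputs the planet comes out at roughly 5 Earth masses and ~2.5 au, which is why the uncertainty in the assumed lens-star properties dominates the final error budget for microlensing planets.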
========================================
[SOURCE: https://www.wired.com/about/accessibility-help/] | [TOKENS: 200]
Accessibility Help
We strive to have websites that are accessible to individuals with disabilities. However, if you encounter any difficulty in using our site, please email us for assistance with accessibility issues.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Hyperlapse_(application)] | [TOKENS: 244]
Contents Hyperlapse (application) Hyperlapse is a mobile app created by Instagram that enables users to produce hyperlapse and time-lapse videos. It was released on August 26, 2014. Overview The app enables users to record up to 45 minutes of footage in a single take, which can subsequently be accelerated to create a hyperlapse cinematographic effect. Whereas time-lapses are normally produced by stitching together stills from traditional cameras, the app uses an image stabilization algorithm that steadies the appearance of video by eliminating jitter. Unlike Instagram, the app offers no filters. Instead, the only post-production option available to users is the modification of playback speed, which can range from 1x to 40x normal playback speed. The app is only available on iOS devices, but Instagram suggested in August 2014 that an Android version would likely be made available in the near future. Fall Out Boy's music video for "Centuries" was filmed using the Hyperlapse app. Hyperlapse was removed from app stores by Instagram as of March 1, 2022.
========================================
[SOURCE: https://en.wikipedia.org/wiki/OpenFOAM] | [TOKENS: 1010]
Contents OpenFOAM OpenFOAM (Open Field Operation And Manipulation) is a C++ toolbox for the development of customized numerical solvers, and pre-/post-processing utilities for the solution of continuum mechanics problems, most prominently including computational fluid dynamics (CFD). The OpenFOAM software is used in research organisations, academic institutes and across many types of industries, for example automotive, manufacturing, process engineering, environmental engineering and marine energy. OpenFOAM is open-source software, freely available and licensed under the GNU General Public License Version 3; it exists in several variants, described below. History The name FOAM is claimed to have first appeared as that of a post-processing tool written by Charlie Hill in the early 1990s in Prof. David Gosman's group at Imperial College London. As a counter-argument, it has been claimed that Henry Weller created the FOAM library for field operation and manipulation, which interfaced to GUISE (Graphical User Interface Software Environment), created by Charlie Hill for interfacing to AVS. As a continuum mechanics / computational fluid dynamics tool, the first development of FOAM (which later became OpenFOAM) is generally credited to Henry Weller at the same institute, who used the C++ programming language, rather than FORTRAN, the de facto standard programming language of the time, to develop a powerful and flexible general simulation platform. From this initiation to the founding of a company called Nabla Ltd, (predominantly) Henry Weller and Hrvoje Jasak carried out the basic development of the software for almost a decade. For a few years, FOAM was sold as a commercial code by Nabla Ltd; on 10 December 2004, it was released under the GPL and renamed OpenFOAM. In 2004, Nabla Ltd was wound up. Immediately afterwards, Henry Weller, Chris Greenshields and Mattijs Janssens founded OpenCFD Ltd to develop and release OpenFOAM. At the same time, Hrvoje Jasak founded the consulting company Wikki Ltd and maintained a fork of OpenFOAM called openfoam-extend, later renamed foam-extend. In December 2010, OpenFOAM development moved to GitHub for its source code repository. On 5 August 2011, OpenCFD transferred its copyrights and interests in OpenFOAM (source code) and documentation to the newly incorporated OpenFOAM Foundation Inc., registered in the state of Delaware, USA. On 8 August 2011, OpenCFD was acquired by Silicon Graphics International (SGI). On 12 September 2012, ESI Group announced the acquisition of OpenCFD Ltd, which became a wholly owned subsidiary of ESI Group while retaining its ownership of the OpenFOAM trademark. On 25 April 2014, The OpenFOAM Foundation Ltd was incorporated in England as a company limited by guarantee, with all assets transferred to the UK and the US entity dissolved, together with changes to the governance of the Foundation. Weller and Greenshields left OpenCFD and formed CFD Direct Ltd in March 2015. On 3 September 2024, Cristel de Rouvray, CEO of ESI Group (acquired by Keysight Technologies Inc), officially resigned as Founder Member and director of The OpenFOAM Foundation Limited. The OpenFOAM Foundation Ltd directors are Henry Weller, Chris Greenshields, and Brendan Bouffler. The three main variants of OpenFOAM are the release maintained by OpenCFD Ltd (distributed via openfoam.com), the release maintained by The OpenFOAM Foundation (distributed via openfoam.org), and the community fork foam-extend maintained around Wikki Ltd. In 2018, OpenCFD Ltd. and some of its industrial, academic, and community partners established an administrative body known as
OpenFOAM Governance, to allow the OpenFOAM user community to help decide and contribute to the future development and direction of their variant of the software. The structure of OpenFOAM Governance consisted of a Steering Committee and various Technical Committees. The Steering Committee comprised representatives from the main sponsors of OpenFOAM in industry, academia, release authorities and consultant organisations. The initial committee involved members from OpenCFD Ltd., ESI Group, Volkswagen, General Motors, FM Global, TotalSim Ltd., TU Darmstadt, and Wikki Ltd. In addition, nine technical committees were established in the following areas: documentation, high performance computing, meshing, multiphase, numerics, optimisation, turbulence, marine applications, and nuclear applications, with members from OpenCFD Ltd., CINECA, University of Zagreb, TU Darmstadt, National Technical University of Athens, Upstream CFD GmbH, University of Michigan, and EPFL. Structure The OpenFOAM installation is organised into two main directories. OpenFOAM computer simulations ("cases") are configured by several plain-text input files located across three case directories: conventionally system (run control and numerical settings, including the controlDict, fvSchemes and fvSolution dictionaries), constant (the mesh and physical properties), and one or more time directories such as 0 (initial and boundary conditions for each field). Additional directories can be generated, depending on user selections.
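As an illustration of the case layout described above, the hypothetical Python helper below checks a case directory for the conventional entries. OpenFOAM itself is configured through its own plain-text dictionary files rather than Python; the helper and its name are invented for this sketch, but the dictionary names under system/ are the standard ones.

from pathlib import Path

# Conventional entries in an OpenFOAM case directory.
EXPECTED = [
    "system/controlDict",   # run control: time step, start/end time, write interval
    "system/fvSchemes",     # finite-volume discretisation schemes
    "system/fvSolution",    # linear solvers and solution-algorithm controls
    "constant",             # mesh (polyMesh) and physical properties
    "0",                    # initial and boundary conditions for each field
]

def missing_case_entries(case_dir: str) -> list[str]:
    """Return the expected entries that are absent from a case directory."""
    root = Path(case_dir)
    return [p for p in EXPECTED if not (root / p).exists()]

if __name__ == "__main__":
    missing = missing_case_entries("cavity")  # e.g. a copy of a tutorial case
    print("missing:", missing or "nothing - layout looks like a valid case")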
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-Moskowitz_74-1] | [TOKENS: 10628]
Computer

A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users.

Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.

Etymology

It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions.

History

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.

In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates.

In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine he announced his invention in 1822, in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". The difference engine was also designed to aid in navigational calculations; in 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains the design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed]

Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Because it used a binary system rather than the harder-to-implement decimal system (used in Charles Babbage's earlier design), Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.

During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process.

The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Early computing machines had fixed programs. Changing a machine's function required re-wiring and re-structuring the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945.

The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job.

The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications.

At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell.

The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics.

The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide.

Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs.

The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip.

Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power.

The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin.

Types

Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer.

Hardware

The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware.

A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits.

Input devices are the means by which the operations of a computer are controlled and it is provided with data; examples include keyboards, mice and joysticks. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form.
Examples include monitors and printers.

The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows; this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: read the code for the next instruction from the cell indicated by the program counter; decode the numerical code for the instruction into a set of commands or signals for each of the other systems; increment the program counter so it points to the next instruction; read whatever data the instruction requires from cells in memory; provide the necessary inputs to an ALU or register; if the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation; write the result back to a memory location, to a register, or perhaps to an output device; and then jump back to the first step.

Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor.

The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.

A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number.
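The fetch, decode and execute steps and the program counter described above can be made concrete with a short sketch. The following toy machine is purely illustrative: its instruction set, register count and opcode names are invented for this example and correspond to no real CPU.

    // A toy stored-program machine illustrating the fetch-decode-execute cycle.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum Op : std::uint8_t { LOADI, ADDR, ADDI, JNZ, HALT };  // invented opcodes

    struct Instr { Op op; int a; int b; };  // an opcode and two operand fields

    int main() {
        std::array<long, 4> reg{};          // a few registers, all starting at 0
        // The program, held in memory as data: sum 5 + 4 + 3 + 2 + 1 into reg 0.
        const std::vector<Instr> mem = {
            {LOADI, 1, 5},   // address 0: reg1 = 5 (loop counter)
            {ADDR,  0, 1},   // address 1: reg0 += reg1
            {ADDI,  1, -1},  // address 2: reg1 -= 1
            {JNZ,   1, 1},   // address 3: if reg1 != 0, jump back to address 1
            {HALT,  0, 0},   // address 4: stop
        };
        std::size_t pc = 0;                 // the program counter
        for (;;) {
            const Instr ins = mem[pc++];    // fetch, then advance the counter
            switch (ins.op) {               // decode and execute
                case LOADI: reg[ins.a] = ins.b;              break;
                case ADDR:  reg[ins.a] += reg[ins.b];        break;
                case ADDI:  reg[ins.a] += ins.b;             break;
                case JNZ:   if (reg[ins.a] != 0) pc = ins.b; break;  // a "jump"
                case HALT:  std::cout << reg[0] << '\n';     return 0;  // prints 15
            }
        }
    }

The JNZ instruction is the conditional jump discussed above: rather than letting the program counter advance to the next instruction, it writes a new value into it, which is all a loop is.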
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers.

In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory.

The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed.

Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, therefore the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g]

In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
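The byte and two's-complement conventions described above can be demonstrated in a few lines. The following C++ sketch (an illustration added here, not a listing from the article) reads one byte as unsigned and as signed two's complement, then combines two consecutive bytes into a larger value:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t byte = 0xFF;      // one byte: 2^8 = 256 possible values; here 255
        std::int8_t  same_bits = -1;   // the same bit pattern read as two's complement
        std::cout << int(byte) << ' ' << int(same_bits) << '\n';  // prints: 255 -1

        // Several consecutive bytes can hold a larger number; two bytes make 0x3039:
        std::uint8_t lo = 0x39, hi = 0x30;
        std::uint16_t value = static_cast<std::uint16_t>((hi << 8) | lo);
        std::cout << value << '\n';    // prints: 12345
        return 0;
    }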
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry.

While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn.

Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss.

Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result.

Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
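The time-slicing scheme described above can be sketched in a few lines of code. The following C++ program is an added illustration (the task names and work counts are invented): it hands out one unit of work per program per round, and a program waiting for input/output gives up its turn.

    #include <iostream>
    #include <string>
    #include <vector>

    struct Program {
        std::string name;
        int work_left;        // units of computation still to do
        bool waiting_for_io;  // a waiting program takes no time slice
    };

    int main() {
        std::vector<Program> programs = {
            {"editor",   3, false},
            {"printer",  2, true},   // blocked on a slow device at first
            {"compiler", 4, false},
        };
        for (int round = 0; ; ++round) {
            bool any_left = false;
            for (auto& p : programs) {
                if (p.work_left == 0) continue;              // finished programs are skipped
                any_left = true;
                if (p.waiting_for_io) { p.waiting_for_io = false; continue; }  // no slice this round
                --p.work_left;                               // run one time slice
                std::cout << "round " << round << ": ran " << p.name << '\n';
            }
            if (!any_left) break;                            // all programs finished
        }
        return 0;
    }

A real operating system does this preemptively, using hardware interrupts rather than a cooperative loop, but the allocation of slices in turn is the same idea.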
Software

Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware".

The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors.

This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction.

Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention.

Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
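The following example is written in the MIPS assembly language; the listing below is a representative sketch of such a program (the particular registers chosen are illustrative), summing the numbers from 1 to 1,000:

    begin:
          addi $8, $0, 0          # initialize the running sum (register 8) to 0
          addi $9, $0, 1          # set the first number to add (register 9) to 1
    loop:
          slti $10, $9, 1001      # register 10 = 1 while the number is still <= 1000
          beq  $10, $0, finish    # once the number passes 1000, leave the loop
          add  $8, $8, $9         # add the current number to the running sum
          addi $9, $9, 1          # move on to the next number
          j    loop               # repeat the summing process
    finish:
          add  $2, $8, $0         # copy the final sum into the output register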
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second.

In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches.

While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler.

A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages—some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.

Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles.

Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge.

Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
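To make the contrast between assembly and high-level languages concrete, the 1-to-1,000 summation shown earlier in MIPS assembly can be written in a few lines of a high-level language. The C++ sketch below is an added illustration; here it is the compiler, not the programmer, that decides which registers and jump instructions to use:

    #include <iostream>

    int main() {
        long sum = 0;
        for (int n = 1; n <= 1000; ++n)   // the compiler turns this loop into
            sum += n;                     // compare, add and branch instructions
        std::cout << sum << '\n';         // prints 500500
        return 0;
    }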
Networking and the Internet

Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET.

Logic gates are a common abstraction that applies to most digital and analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity.

In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data (a minimal sketch of this kind of parameter fitting appears at the end of this section). The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans.

Professions and organizations

As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
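The parameter-fitting idea mentioned above can be sketched in a few lines. The following C++ program is an added illustration (the data set, the one-parameter model and the learning rate are all invented for the example): it fits the single parameter w of the model y = w * x to four data points by gradient descent.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        const std::vector<double> xs = {1, 2, 3, 4};
        const std::vector<double> ys = {2, 4, 6, 8};   // data generated by y = 2x
        double w = 0.0;                                // the single learnable parameter
        const double lr = 0.01;                        // learning rate

        for (int step = 0; step < 200; ++step) {
            double grad = 0.0;
            for (std::size_t i = 0; i < xs.size(); ++i)
                grad += 2 * (w * xs[i] - ys[i]) * xs[i];   // d/dw of the squared error
            w -= lr * grad / xs.size();                    // adjust the parameter
        }
        std::cout << "learned w = " << w << '\n';          // close to 2
        return 0;
    }

Each pass nudges w against the gradient of the squared error, so w approaches 2, the value that generated the data; real machine-learning models do the same thing with millions or billions of parameters.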
========================================
[SOURCE: https://en.wikipedia.org/wiki/High-mountain_Asia] | [TOKENS: 607]
High-mountain Asia

High-mountain Asia (HMA) or High Asia is a high-elevation geographic region in central-south Asia that includes numerous cordilleras and highland systems around the Tibetan Plateau, encompassing regions of East, Southeast, South and Central Asia. The region was orogenically formed by the continental collision of the Indian Plate into (and underneath) the Eurasian Plate. According to NASA, the region is the "world's largest reservoir of perennial glaciers and snow outside of the Earth's polar ice sheets", and has been nicknamed the "Third Pole". The meltwaters and runoff from these glaciers and snowfields form the headwaters of river systems that support the drinking water and food production of nearly 3 billion people, and hydrological and climate changes in the mountains affect "ecosystem services, agriculture, energy and livelihood" for all the surrounding areas. NASA has a High Mountain Asia Team (HiMAT) to study the region.

In a 2020 study, the term High Asia or High Mountain Asia was used metaphorically to categorise Kashmir, Hazara, Nuristan, Laghman, Azad Kashmir, Jammu, Himachal Pradesh, Ladakh, Gilgit Baltistan, Chitral, Western Tibet, Western Xinjiang, Badakhshan, Gorno Badakhshan, Fergana, Osh and Turkistan Region. These rich resource areas are surrounded by the five major mountain systems of Tien Shan, Pamirs, Karakoram, Hindu Kush and Western Himalayas and the three main river systems of Amu Darya, Syr Darya and Indus. The work further highlighted the role of the United States, China, Russia, the UK, India, Pakistan, Afghanistan, Kazakhstan, Uzbekistan, Kyrgyzstan, Tajikistan, Turkey, Iran and other players involved in The New Great Game over who will dominate High Asia in the 21st century.

Toponymy

Due to its inclusion of the highest mountains on Earth, the region has been metaphorically labelled the "Roof of the World". The phrase was historically applied to the Pamirs, and then to Tibet.

Geography

High-mountain Asia is centered around the Tibetan Plateau and extends into the surrounding regions as numerous mountain ranges. These mountain-range networks contain all 14 peaks above 8,000 m (26,000 ft) and all of the peaks above 7,000 m (23,000 ft), and extend across the mountainous Chinese provinces of Tibet, Xinjiang, Sichuan and Yunnan (including the flat and depressed Tarim Basin in southern Xinjiang, which contains China's largest desert, the Taklamakan), northern Myanmar, the Himalayan nations of Nepal and Bhutan as well as north/northwestern Pakistan and northeastern India, and most of the southeastern Central Asian nations of Kyrgyzstan, Tajikistan and Afghanistan. Their rain shadows are partly responsible for the cold arid climate in parts of Central Asia and the Mongolian Plateau.
========================================