[SOURCE: https://en.wikipedia.org/wiki/Deuterostomes] | [TOKENS: 1907]
Deuterostome

Deuterostomes (from Ancient Greek δεύτερος (deúteros) 'second' and στόμα (stóma) 'mouth') are bilaterian animals of the superphylum Deuterostomia (/ˌdjuːtərəˈstoʊmi.ə/), typically characterized by their anus forming before the mouth during embryonic development. Deuterostomia comprises three phyla: Chordata, Echinodermata, and Hemichordata, as well as the extinct clade Cambroernida. In deuterostomes, the developing embryo's first opening (the blastopore) becomes the anus and cloaca, while the mouth is formed at a different site later on. This was initially the group's distinguishing characteristic, but deuterostomy has since been discovered among protostomes as well. The deuterostomes are also known as enterocoelomates, because their coelom develops through pouching of the gut, enterocoely. Deuterostomia's sister clade is Protostomia, animals that develop the mouth first and whose digestive tract development is more varied. Protostomia includes the ecdysozoans and spiralians, as well as the extinct Kimberella. Together with the Xenacoelomorpha, these constitute the large clade Bilateria, i.e. animals with bilateral symmetry and three germ layers.

Systematics

Initially, Deuterostomia included the phyla Brachiopoda, Bryozoa, Chaetognatha, and Phoronida based on morphological and embryological characteristics. However, Deuterostomia was redefined in 1995 based on DNA molecular sequence analyses, leading to the removal of the lophophorates, which were later combined with other protostome animals to form the superphylum Lophotrochozoa. The arrow worms were also considered possible deuterostomes, but molecular studies have placed them in the protostomes. Genetic studies have also revealed that deuterostomes have more than 30 genes not found in any other animal groups, yet which are present in some marine algae and prokaryotes. This could mean that these are ancient genes that were lost in other organisms, or that a common ancestor acquired them through horizontal gene transfer. A consensus taxonomy of the deuterostomes groups Echinodermata and Hemichordata together as Ambulacraria, with Chordata as their sister group. There is a possibility that Ambulacraria is instead the sister clade to Xenacoelomorpha, with which it could form the group Xenambulacraria, but recent transcriptome analyses concluded that the phylum Xenacoelomorpha is the sister group to the Nephrozoa, which includes both the protostomes and the deuterostomes, making that phylum the basalmost bilaterian clade. This would mean they are neither deuterostomes nor protostomes.

Characteristics

In deuterostomes, the developing embryo's first opening, the blastopore, becomes the anus, while the gut eventually tunnels through the embryo until it reaches the other side, forming an opening that becomes the mouth. This distinguishes them from protostomes, which have a variety of patterns of development. In both deuterostomes and protostomes, a zygote first develops into a hollow ball of cells, called a blastula. In deuterostomes, the early divisions occur parallel or perpendicular to the polar axis. This is called radial cleavage, and it also occurs in certain protostomes, such as the lophophorates. Most deuterostomes display indeterminate cleavage, in which the developmental fate of the cells in the developing embryo is not determined by the identity of the parent cell. Thus, if the first four cells are separated, each can develop into a complete small larva; and if a cell is removed from the blastula, the other cells will compensate. This is the source of identical twins.
The mesoderm forms as evaginations of the developed gut that pinch off to form the coelom. This process is called enterocoely. Another feature present in both the Hemichordata and Chordata is pharyngotremy: the presence of spiracles or gill slits opening into the pharynx, which is also found in some primitive fossil echinoderms (mitrates). A hollow nerve cord is found in all chordates, including tunicates (in the larval stage). Some hemichordates also have a tubular nerve cord, which in the early embryonic stage looks like the hollow nerve cord of chordates. Both the hemichordates and the chordates have a thickening of the aorta, homologous to the chordate heart, which contracts to pump blood. This suggests that the organ was present in the common deuterostome ancestor of the three groups, with the echinoderms having secondarily lost it. The highly modified nervous system of echinoderms obscures much about their ancestry, but several facts suggest that all present deuterostomes evolved from a common ancestor that had pharyngeal gill slits, a hollow nerve cord, circular and longitudinal muscles, and a segmented body.

Origins and evolution

Bilateria, one of the five major lineages of animals, is split into two groups: the protostomes and deuterostomes. Deuterostomes consist of chordates (which include the vertebrates) and ambulacrarians. It seems likely that the 555-million-year-old Kimberella was a member of the protostomes. That implies that the protostome and deuterostome lineages split long before Kimberella appeared, and hence well before the start of the Cambrian 538.8 million years ago, i.e. during the earlier part of the Ediacaran Period (circa 635–539 Mya, around the end of the global Marinoan glaciation in the late Neoproterozoic). It has been proposed that the ancestral deuterostome, before the chordate/ambulacrarian split, could have been a chordate-like animal with a terminal anus and pharyngeal openings but no gill slits, with an active suspension-feeding strategy. The last common ancestor of the deuterostomes had lost all innexin diversity. Deuterostomes have a rich fossil record, with thousands of fossil species found throughout the Phanerozoic. There are also a few earlier fossils that may represent deuterostomes, but these remain debated. The earliest of these disputed fossils are the tunicate-like organisms Burykhia and Ausia from the Ediacaran period. While these may in fact be tunicates, others have interpreted them as cnidarians or sponges, and as such their true affinity remains uncertain. Another Ediacaran fossil, Arkarua, may represent the earliest echinoderm, while Yanjiahella from the early Cambrian (Fortunian) is another notable stem-group echinoderm. Fossils of one major deuterostome group, the echinoderms (whose modern members include sea stars, sea urchins and crinoids), are quite common from the start of Stage 3 of the Cambrian, 521 million years ago, starting with forms such as Helicoplacus. Two other Cambrian Stage 3 (521–514 mya) species, Haikouichthys and Myllokunmingia from the Chengjiang biota, are the earliest body fossils of fish, whereas Pikaia, discovered much earlier but from the Mid Cambrian Burgess Shale, is now regarded as a primitive chordate. The Mid Cambrian fossil Rhabdotubus johanssoni has been interpreted as a pterobranch hemichordate, whereas Spartobranchus is an acorn worm from the Burgess Shale, providing evidence that all main lineages were already well established 508 mya.
On the other hand, fossils of early chordates are very rare, as non-vertebrate chordates have no bone tissue or teeth, and no post-Cambrian fossils of non-vertebrate chordates are known aside from the Permian-aged Paleobranchiostoma, trace fossils of the Ordovician colonial tunicate Catellocaula, and various Jurassic- and Tertiary-aged spicules tentatively attributed to ascidians. Fossils of Echinodermata are very common after the Cambrian. Fossils of Hemichordata are less common, except for the graptolites, which persist until the Lower Carboniferous. As of 2024, the deuterostomes are considered to be monophyletic. The ancestral deuterostome was most likely a benthic worm that possessed a cartilaginous skeleton, a central nervous system, and gill slits. Approximate dates for clades are given in millions of years ago (mya).

[Cladogram omitted; it covered Cephalochordata, Tunicata, and Vertebrata within Chordata, and Echinodermata, Pterobranchia, and Enteropneusta within Ambulacraria, with Protostomia as the outgroup.]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer_graphics_(computer_science)] | [TOKENS: 610]
Computer graphics (computer science)

Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to the study of three-dimensional computer graphics, it also encompasses two-dimensional computer graphics and image processing.

Overview

Computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities.

History

There are several international conferences and journals where the most significant results in computer graphics are published. Among them are the SIGGRAPH and Eurographics conferences and the Association for Computing Machinery (ACM) Transactions on Graphics journal. The joint Eurographics and ACM SIGGRAPH symposium series features the major venues for the more specialized sub-fields: Symposium on Geometry Processing, Symposium on Rendering, Symposium on Computer Animation, and High Performance Graphics. As in the rest of computer science, conference publications in computer graphics are generally more significant than journal publications (and correspondingly have lower acceptance rates).

Subfields

The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two-dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes (with fluids being the most notable example).

The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become computationally more powerful.

Rendering generates images from a model. Rendering may simulate light transport to create realistic images, or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light).
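The split into transport and scattering is usually formalized by the rendering equation (Kajiya, 1986), which the article does not state explicitly; the following is the standard form, given here as context rather than as part of the original text:

```latex
% Outgoing radiance = emitted radiance + BRDF-weighted incoming radiance,
% integrated over the hemisphere of incoming directions at surface point x.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,(\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Here transport determines the incoming radiance L_i at each point, while scattering is encoded in the BRDF f_r, which describes how the surface redirects incoming light toward the outgoing direction.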
Applications of computer graphics tools include:
- Bitmap design / image editing
- Vector drawing
- Architecture
- Video editing
- Sculpting, animation, and 3D modeling
- Digital composition
- Rendering

Industrial labs doing "blue sky" graphics research, as well as major film studios, are also notable contributors to graphics research.
========================================
[SOURCE: https://en.wikipedia.org/wiki/TTCN] | [TOKENS: 393]
TTCN

TTCN is a programming language used for testing of communication protocols and web services. A TTCN test suite consists of many test cases written in the TTCN programming language. Until version 2, the language was written in tables and called Tree and Tabular Combined Notation; reading and editing it required special TTCN editors. Beginning with version 3, TTCN was renamed Testing and Test Control Notation. It is now closer to current programming languages and can be edited with traditional editors. TTCN-3 is more flexible than TTCN-2 in that it can be used for protocol testing as well as for testing traditional software. All versions of TTCN need dedicated compilers or interpreters for execution.

TTCN is widely used, for example by ETSI and the ITU, for the testing of telecommunication protocols. Conformance test cases of ETSI standards such as ISDN, DECT, GSM, EDGE, 3G, and DSRC have been written in TTCN. More recently it has also been used for testing various protocol standards, e.g. Bluetooth and IP. Execution of those test cases against products (e.g. phones, mobile phones, service enablers or network elements) is used to verify that the protocol implementation in those products meets the requirements defined by telecommunication standards. TTCN is often combined with ASN.1.

Versions

Background information

Conformance testing means test execution against a well-defined test case, e.g. a test written in TTCN. Plug-tests, also called interoperability testing, are test cases executed with a client-server model, e.g. a mail client against a mail server, a phone against a network, or a Bluetooth headset against a Bluetooth phone. Plug-tests are often executed at dedicated interoperability events. Conformance tests and plug-tests complement each other: if properly done, interoperability errors found with plug-tests should lead to the definition of new conformance test cases that cover the failed behavior.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_note-31] | [TOKENS: 1458]
Maor Farid

Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, social activist, and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for the empowerment of youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation [he]. Dr. Farid was elected to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism.

Early life

Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from immigrant families of Mizrahi Jews from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school for a better future for his family. During elementary school, Maor competed in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated high school with the highest GPA in his school. Later he was recruited to the Israel Defense Forces and drafted to the Brakim Program [he], an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of Captain.

Education and academic career

As part of the four-year Brakim Program, Maor completed his bachelor's and master's degrees in mechanical engineering at the Technion with honors. He then began his Ph.D. research in collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were predicting the irreversible effects of major earthquakes on Israel's nuclear facilities and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks due to earthquakes that took place in Italy (2012) and Mexico (2017). The energy absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to implement the findings on existing sensitive systems, and won funding of 1.5 million NIS from the Pazy foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D.
as the youngest graduate at the Technion that year, at the age of 24. In the graduation ceremony, he honored his parents by having them receive the diplomas on his behalf. In the same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. In the same year, he received an excellence research grant from the Israel Academy of Sciences and Humanities for leading his research collaboration between MIT and the Technion.

Social activism

Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. In 2010–2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" [he] program for high-school students at risk in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate the social gaps in Israeli society by empowering youths from the social, economic, and geographical periphery toward excellence, self-fulfillment, and formal education. In 2018, Learn to Succeed became an official non-profit organization. In the same year, Farid led a crowdfunding campaign that raised 150,000 NIS to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he went through from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds; runs a hotline that gives online practical and emotional support to hundreds of youths, parents and educators; organizes inspirational activities with a military orientation to increase the motivation of its teenage members for significant military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed by the unit, an opportunity usually given to students with the highest grades in the matriculation exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel.
He also serves as a volunteer in the American Technion Society.

Personal life

Farid is married to Michal.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Posek] | [TOKENS: 698]
Posek

In Jewish law, a posek (Hebrew: פוסק [poˈsek], pl. poskim, פוסקים‎ [posˈkim]) is a legal scholar who determines the application of halakha, the Jewish religious laws derived from the written and Oral Torah, in cases of Jewish law where previous authorities are inconclusive, or in those situations where no clear halakhic precedent exists. The decision of a posek is known as a psak halakha ("ruling of law"; pl. piskei halakha) or simply a "psak". Piskei halakha are generally recorded in the responsa literature.

Orthodox Judaism

A posek has to be an individual who is particularly learned in a large area of law, who has shown the ability to apply complex legal rulings to a variety of novel situations, and who possesses a very clear command of the Talmud, the Shulchan Aruch (the code of Jewish law), and the legal responsa with practical applications. This is a huge amount of material to master and takes years of dedicated study to achieve, generally more than the training of a rabbi or rav of a community. Poskim play an integral role in Orthodox Judaism. Poskim will generally not overrule a specific law unless based on an earlier authority: a posek will generally extend a law to new situations but will not change the halakhah.

Conservative Judaism

Conservative Judaism approaches the idea of the posek, and halakha in general, somewhat differently: poskim here apply a relatively lower weighting to precedent, and will thus frequently reinterpret (or even change) a previous ruling through a formal argument. Although there are some poskim in the Conservative movement (e.g., Louis Ginzberg, David Golinkin, Joel Roth, and Elliot Dorff), the rulings of any one individual rabbi are considered less authoritative than a consensus ruling. Thus, the Conservative movement's Rabbinical Assembly maintains a Committee on Jewish Law and Standards, whose decisions are accepted as authoritative within the American Conservative movement. At the same time, every Conservative rabbi has the right of mara d'atra to interpret Jewish law for his, her, or their own community regardless of the responsa of the Law Committee.

Progressive Judaism

Both Reform and Reconstructionist Judaism do not regard halakha as binding. Although Reform stresses the individual autonomy of its membership, it never completely abandoned the field of responsa literature, if only to counter its rivals' demands. Even Classical Reformers such as Rabbi David Einhorn composed some. Rabbi Solomon Freehof and his successor Rabbi Walter Jacob attempted to create a concept of "Progressive Halacha", authoring numerous responsa based on a methodology laying great emphasis on current sensibilities and ethical ideals. Full-text collections of Reform responsa are available on the website of the Central Conference of American Rabbis. The Reconstructionist position is that if Jews were to form cohesive communities again, their rulings would be binding, but at present Judaism is in a "post-Halakhic state". Therefore, their basic policy is to allow tradition "a vote, not a veto" in communal and personal affairs.

List of poskim and major works

In chronological order by year of birth and, if needed, secondarily by year of death and surname.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_note-lucchita_rosanova-165] | [TOKENS: 11899]
Mars

Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperatures range from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, and larger polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread even by the weak winds of the tenuous atmosphere, owing to the low Martian gravity. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling underneath the ground, but it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) and one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which followed and continues to the present, dominates the geological processes seen on Mars today. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
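The calendar figures quoted above (1.88 Earth years, 687 Earth days, a 24.6-hour sol) are mutually consistent; a minimal sketch, assuming a sol of 24.66 hours (the commonly cited precise value, not stated in the text):

```python
# Consistency check of the Martian calendar figures quoted above.
EARTH_DAY_H = 24.0
SOL_H = 24.66                 # assumed precise sol length in hours
MARS_YEAR_EARTH_DAYS = 687.0  # Martian year, from the text

earth_years = MARS_YEAR_EARTH_DAYS / 365.25
sols_per_mars_year = MARS_YEAR_EARTH_DAYS * EARTH_DAY_H / SOL_H

print(f"Martian year: {earth_years:.2f} Earth years")  # ~1.88, as quoted
print(f"Martian year: {sols_per_mars_year:.0f} sols")  # ~669 Martian solar days
```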
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned.

Natural history

Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon 20 times more massive than Phobos orbiting Mars billions of years ago, with Phobos being a remnant of that ring. The geological history of Mars can be split into many periods, but three primary periods are distinguished: the Noachian, the Hesperian, and the Amazonian, described above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches.

Physical characteristics

Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
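The mass, volume, and gravity ratios quoted above are mutually consistent, since surface gravity scales as M/R² and radius as the cube root of volume; a minimal check:

```python
# Check that ~11% of Earth's mass and ~15% of Earth's volume
# imply ~38% of Earth's surface gravity (g ∝ M / R², R ∝ V^(1/3)).
mass_ratio = 0.107    # Mars/Earth mass, from the text
volume_ratio = 0.151  # Mars/Earth volume, from the text

radius_ratio = volume_ratio ** (1.0 / 3.0)      # ~0.53, matching "half the diameter"
gravity_ratio = mass_ratio / radius_ratio ** 2

print(f"Radius ratio:  {radius_ratio:.3f}")   # ~0.532
print(f"Gravity ratio: {gravity_ratio:.3f}")  # ~0.38, as quoted
```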
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in the surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to grow again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or to silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low-albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars, and new ones appear frequently on the steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. They can start in a tiny area, then spread out for hundreds of metres, and have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including some that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day (22 millirads per day) experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded.

Geography and features

Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum; the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its prime meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe, and it extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide, and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from the micrometeoroids, UV radiation, solar flares and high-energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" (or araneiforms) are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth: for example, the dust suspended by the 2001 global dust storms on Mars remained in the Martian atmosphere for only 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms moved only the equivalent of a very thin dust layer, about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols.

Atmosphere

Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
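These pressure figures reflect the exponential fall-off of pressure with altitude, whose e-folding distance (scale height) can be estimated from the ideal-gas relation H = RT/(Mg). A minimal sketch, assuming an isothermal CO2 atmosphere at roughly 210 K (a round value not taken from the article):

```python
# Rough estimate of the Martian atmospheric scale height H = R*T / (M*g).
R = 8.314   # J/(mol*K), universal gas constant
T = 210.0   # K, assumed mean atmospheric temperature
M = 0.044   # kg/mol, molar mass of CO2 (the dominant constituent)
g = 3.71    # m/s^2, Martian surface gravity

H = R * T / (M * g)
print(f"Scale height: {H / 1000:.1f} km")  # ~10.7 km
```

The result, roughly 10.7 km, agrees with the scale height of about 10.8 km quoted below.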
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi) because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, the higher concentration of atmospheric CO2 and the lower surface pressure may be why sound is attenuated more on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive and unexpected solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity: the planet approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. Summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% of Earth's), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, with winds reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase the global temperature. Seasonally, deposits of dry ice also cover the polar ice caps.

Hydrology

While Mars contains significant amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% of Earth's. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and various forms of snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much longer than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along crater and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and to face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering have been observed, nor any impact craters superimposed on them, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite, which forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011 the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that it had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in the Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on the timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect that much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10⁻⁴) is five to seven times the amount on Earth (D/H = 1.56 × 10⁻⁴), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4-kilometre (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
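The deuterium enrichment factor quoted above follows directly from the two D/H ratios; a one-line verification:

```python
# Verify that the Martian D/H ratio is roughly 5-7x the terrestrial one.
d_h_mars = 9.3e-4    # (9.3 ± 1.7) × 10⁻⁴, from the text
d_h_earth = 1.56e-4  # terrestrial reference value, from the text

# Central value ~6.0x; the ±1.7 uncertainty spans ~4.9x to ~7.1x,
# matching the quoted "five to seven times".
print(f"Enrichment: {d_h_mars / d_h_earth:.1f}x")
```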
Orbital motion

Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet relative to Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years, compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, with a synodic period of 779.94 days. Opposition should not be confused with solar conjunction, when Earth and Mars are on opposite sides of the Sun and form a straight line through it. The average time between the successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it appears to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval.

Moons

Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), whose orbits lie 9,376 km (5,826 mi) and 23,460 km (14,580 mi) from the planet's center.
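Several figures in this section follow from basic two-body relations: the synodic period from the two heliocentric orbital periods, and Phobos's orbit (whose radius is given just above) from Kepler's third law. A sketch of both checks; the value of Mars's gravitational parameter GM is a standard constant supplied here as an assumption:

```python
import math

# Synodic period of Mars as seen from Earth: 1/S = 1/P_earth - 1/P_mars.
P_EARTH = 365.256  # days, Earth's sidereal orbital period
P_MARS = 686.98    # days, Mars's sidereal orbital period
synodic = 1.0 / (1.0 / P_EARTH - 1.0 / P_MARS)
print(f"Synodic period: {synodic:.1f} days")        # ~779.9, as quoted above

# Phobos's orbital period from Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
GM_MARS = 4.2828e13  # m^3/s^2, Mars's gravitational parameter (assumed constant)
a_phobos = 9.376e6   # m, Phobos's orbital radius quoted above
T_phobos = 2.0 * math.pi * math.sqrt(a_phobos**3 / GM_MARS)
print(f"Phobos period: {T_phobos / 3600:.2f} h")    # ~7.66 h

# Seen from the surface, Phobos's rise-to-rise interval follows from the
# difference between its orbital rate and Mars's rotation rate.
SIDEREAL_DAY_H = 24.623  # h, Mars's sidereal rotation period (assumed value)
rise_to_rise = 1.0 / (1.0 / (T_phobos / 3600) - 1.0 / SIDEREAL_DAY_H)
print(f"Phobos rise-to-rise: {rise_to_rise:.1f} h")  # ~11 h, as stated below
```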
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear very different from those of Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence that Phobos has a highly porous interior, and that its composition contains mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study by an international team of researchers suggested that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks recording tidal processes on the planet suggests that those tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur about every two years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague. 
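Kepler's third law makes the contrast between the two moons' apparent motions concrete: Phobos's orbital period is shorter than the Martian day, so it outruns the planet's rotation and rises in the west, while Deimos's is only slightly longer than synchronous. A sketch in Python (the gravitational parameter of Mars is the commonly cited constant):

    from math import pi, sqrt

    GM_MARS = 4.2828e13  # gravitational parameter of Mars, m^3/s^2

    def period_hours(a_km: float) -> float:
        """Orbital period T = 2*pi*sqrt(a^3/GM), in hours."""
        a = a_km * 1e3
        return 2 * pi * sqrt(a ** 3 / GM_MARS) / 3600

    print(f"Phobos: {period_hours(9376):.2f} h")   # ~7.66 h, well under the 24.66 h sol
    print(f"Deimos: {period_hours(23460):.2f} h")  # ~30.3 h, just over synchronous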
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "fiery"). More commonly, the Greek name for the planet now referred to as Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy were presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the Martian orbit, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum.: 433–437 In 1610, Italian astronomer Galileo Galilei became the first to use a telescope for astronomical observation, including of Mars. The diurnal parallax of Mars was later measured telescopically in an effort to determine the Sun–Earth distance; this was first performed by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. 
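The Babylonian period relation mentioned earlier (37 synodic periods, or 42 circuits of the zodiac, every 79 years) checks out numerically against the modern values used in this article; a quick sketch in Python:

    synodic_period = 779.94   # days between successive oppositions
    sidereal_period = 686.98  # days for one circuit of the zodiac

    print(37 * synodic_period / 365.25)   # ~79.0 years
    print(42 * sidereal_period / 365.25)  # ~79.0 years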
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by the observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were seen, but no canali. The first spacecraft sent from Earth to visit Mars was the Soviet Union's Mars 1, which flew by in 1963, although contact had been lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully transmit from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between Viking 1's shutdown in 1982 and 1997, Mars was visited only by three unsuccessful probes: Phobos 1 (1988) and Mars Observer (1993), which flew past without contact, and Phobos 2 (1989), which malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field, and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the different elements of the history and dynamics of the hydrosphere of Mars and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, including 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars. 
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. A number of further missions to Mars are planned. As of February 2024, debris from Mars missions amounts to over seven tons; most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery, and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were still published on Martian biology, setting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life. 
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed as possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinization. Impact glass, formed by meteor impacts, which on Earth can preserve signs of life, has also been found on the surface of impact craters on Mars and could likewise have preserved signs of life, if life existed at the site. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin of this rock can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be infeasible. In addition, in 2021, China announced plans to send a crewed mission to Mars in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling on the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared within the company in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years. This would be enabled by the planned mass manufacturing of Starship and initially sustained by resupply from Earth and in-situ resource utilization on Mars, until the Mars colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with an arrow pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Drought] | [TOKENS: 4775]
Contents Drought A drought is a period of drier-than-normal conditions.: 1157 A drought can last for days, months or years. Drought often has large impacts on the ecosystems and agriculture of affected regions, and causes harm to the local economy. Annual dry seasons in the tropics significantly increase the chances of a drought developing, with subsequently increased wildfire risks. Heat waves can significantly worsen drought conditions by increasing evapotranspiration. This dries out forests and other vegetation, and increases the amount of fuel for wildfires. Drought is a recurring feature of the climate in most parts of the world, becoming more extreme and less predictable due to climate change, a trend that dendrochronological (tree-ring) studies trace back to 1900. There are three kinds of drought effects: environmental, economic and social. Environmental effects include the drying of wetlands, more and larger wildfires, and loss of biodiversity. Economic impacts of drought result from disruptions to agriculture and livestock farming (causing food insecurity), forestry, public water supplies, river navigation (due to, for example, lower water levels), electric power supply (by affecting hydropower systems), and impacts on human health. Social and health costs include the negative effect on the health of people directly exposed to this phenomenon (excessive heat waves), high food costs, stress caused by failed harvests, water scarcity, and so on. Drought can also lead to increased air pollution due to increased dust concentrations and wildfires. Prolonged droughts have caused mass migrations and humanitarian crises. Examples of regions with increased drought risks are the Amazon basin, Australia, the Sahel region and India. For example, in 2005, parts of the Amazon basin experienced the worst drought in 100 years. Australia could experience more severe droughts and they could become more frequent in the future, a government-commissioned report said on July 6, 2008. The long Australian Millennial drought broke in 2010. The 2020–2022 Horn of Africa drought surpassed the severe drought of 2010–2011 in both duration and severity. Throughout history, humans have usually viewed droughts as disasters due to the impact on food availability and the rest of society. People have viewed drought as a natural disaster, as something influenced by human activity, or as a result of supernatural forces. Definition The IPCC Sixth Assessment Report defines a drought simply as "drier than normal conditions".: 1157 This means that a drought is "a moisture deficit relative to the average water availability at a given location and season".: 1157 According to the National Integrated Drought Information System, a multi-agency partnership, drought is generally defined as "a deficiency of precipitation over an extended period of time (usually a season or more), resulting in a water shortage". The National Weather Service office of the NOAA defines drought as "a deficiency of moisture that results in adverse impacts on people, animals, or vegetation over a sizeable area". Drought is a complex phenomenon, relating to the absence of water, that is difficult to monitor and define. By the early 1980s, over 150 definitions of "drought" had already been published. The range of definitions reflects differences in regions, needs, and disciplinary approaches. 
Categories There are three major categories of drought, based on where in the water cycle the moisture deficit occurs: meteorological drought, hydrological drought, and agricultural or ecological drought.: 1157 A meteorological drought occurs due to lack of precipitation. A hydrological drought is related to low runoff, streamflow, and reservoir and groundwater storage. An agricultural or ecological drought causes plant stress through a combination of evaporation and low soil moisture.: 1157 Some organizations add another category: socioeconomic drought, which occurs when the demand for an economic good exceeds supply as a result of a weather-related shortfall in water supply. Socioeconomic drought is a similar concept to water scarcity. The different categories of droughts have different causes but similar effects. Indices and monitoring Several indices have been defined to quantify and monitor drought at different spatial and temporal scales. A key property of drought indices is their spatial comparability, and they must be statistically robust. Drought indices include, for example, the Standardized Precipitation Index (SPI) and the Palmer Drought Severity Index (PDSI). High-resolution drought information helps to better assess the spatial and temporal changes and variability in drought duration, severity, and magnitude at a much finer scale. This supports the development of site-specific adaptation measures. The application of multiple indices using different datasets helps to better manage and monitor droughts than using a single dataset. This is particularly the case in regions of the world where not enough data is available, such as Africa and South America. Using a single dataset can be limiting, as it may not capture the full spectrum of drought characteristics and impacts. Careful monitoring of moisture levels can also help predict increased risk for wildfires. Causes Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation over a longer duration. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Droughts occur mainly in areas where normal levels of rainfall are, in themselves, low. If these factors do not support precipitation volumes sufficiently to reach the surface over a sufficient time, the result is a drought. Drought can be triggered by a high level of reflected sunlight, an above-average prevalence of high-pressure systems, and winds carrying continental, rather than oceanic, air masses. Ridges of high pressure aloft can prevent or restrict the development of thunderstorm activity or rainfall over a given region. Once a region is within drought, feedback mechanisms such as local arid air, hot conditions which can promote warm-core ridging, and minimal evapotranspiration can worsen drought conditions. Within the tropics, distinct wet and dry seasons emerge due to the movement of the Intertropical Convergence Zone or monsoon trough. The dry season greatly increases drought occurrence, and is characterized by its low humidity, with watering holes and rivers drying up. Because of the lack of these watering holes, many grazing animals are forced to migrate in search of water and more fertile lands. Examples of such animals are zebras, elephants, and wildebeest. 
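As an illustration of the drought indices discussed above, the following is a minimal stand-in for a standardized precipitation index, in Python. The full SPI fits a gamma distribution to the precipitation record and maps it through the standard normal; a plain z-score is used here for brevity, and the station data are hypothetical:

    import statistics

    def standardized_anomaly(history: list[float], current: float) -> float:
        """Z-score of a precipitation total against the historical record.
        Negative values indicate drier-than-normal conditions."""
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history)
        return (current - mean) / stdev

    # Hypothetical seasonal rainfall totals (mm) for one station
    history = [310.0, 290.5, 402.1, 355.0, 280.2, 330.7, 298.4, 365.9]
    print(round(standardized_anomaly(history, 150.0), 2))  # strongly negative: drought signal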
Because of the lack of water in the plants, bushfires are common. Since air can hold more water vapor at higher temperatures, more water vapor is required to raise relative humidity to 100% at higher temperatures (or to get the temperature to fall to the dew point). Periods of warmth quicken the pace of fruit and vegetable production, increase evaporation and transpiration from plants, and worsen drought conditions. The El Niño–Southern Oscillation (ENSO) phenomenon can sometimes play a significant role in drought. ENSO comprises two patterns of temperature anomalies in the central Pacific Ocean, known as La Niña and El Niño. La Niña events are generally associated with drier and hotter conditions and further exacerbation of drought in California and the Southwestern United States, and to some extent the U.S. Southeast. Meteorological scientists have observed that La Niñas have become more frequent over time. Conversely, during El Niño events, drier and hotter weather occurs in parts of the Amazon River Basin, Colombia, and Central America. Winters during an El Niño are warmer and drier than average in the Northwest, northern Midwest, and northern Mideast United States, so those regions experience reduced snowfalls. Conditions are also drier than normal from December to February in south-central Africa, mainly in Zambia, Zimbabwe, Mozambique, and Botswana. Direct effects of El Niño resulting in drier conditions occur in parts of Southeast Asia and Northern Australia, increasing bush fires, worsening haze, and decreasing air quality dramatically. Drier-than-normal conditions are also generally observed in Queensland, inland Victoria, inland New South Wales, and eastern Tasmania from June to August. As warm water spreads from the west Pacific and the Indian Ocean to the east Pacific, it causes extensive drought in the western Pacific. In 2014, Singapore experienced its driest February since records began in 1869, with only 6.3 mm of rain falling in the month and temperatures hitting as high as 35 °C on 26 February. The years 1968 and 2005 had the next-driest Februaries, when 8.4 mm of rain fell. Globally, the occurrence of droughts has increased as a result of the increase in temperature and atmospheric evaporative demand. In addition, increased climate variability has increased the frequency and severity of drought events. Moreover, the occurrence and impact of droughts are aggravated by anthropogenic activities such as land-use change and water management and demand. The IPCC Sixth Assessment Report also pointed out that "Warming over land drives an increase in atmospheric evaporative demand and in the severity of drought events": 1057 and "Increased atmospheric evaporative demand increases plant water stress, leading to agricultural and ecological drought".: 578 There is a rise in compound warm-season droughts in Europe, concurrent with an increase in potential evapotranspiration. Climate change affects many factors associated with droughts. These include how much rain falls and how fast the rain evaporates again. Warming over land increases the severity and frequency of droughts around much of the world.: 1057 In some tropical and subtropical regions of the world, there will probably be less rain due to global warming. This will make them more prone to drought. Droughts are set to worsen in many regions of the world. These include Central America, the Amazon and south-western South America. They also include West and Southern Africa. 
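The temperature dependence described at the start of this passage can be made concrete with the Magnus approximation, a standard empirical formula for the saturation vapor pressure of water (the coefficients below are one commonly used set):

    from math import exp

    def saturation_vapor_pressure_hpa(temp_c: float) -> float:
        """Magnus approximation over liquid water, in hPa."""
        return 6.112 * exp(17.62 * temp_c / (243.12 + temp_c))

    # Warmer air needs far more vapor to reach saturation,
    # which is why warm spells deepen a moisture deficit:
    for t in (10, 20, 30):
        print(t, round(saturation_vapor_pressure_hpa(t), 1))  # 12.3, 23.3, 42.3 hPa

At 10, 20, and 30 °C the saturation vapor pressure is roughly 12, 23, and 42 hPa, so each 10 °C of warming nearly doubles the amount of vapor needed to reach saturation.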
The Mediterranean and south-western Australia are also among these regions.: 1157 Higher temperatures increase evaporation. This dries the soil and increases plant stress. Agriculture suffers as a result. This means that even regions where overall rainfall is expected to remain relatively stable will experience these impacts.: 1157 Such regions include central and northern Europe. Without climate change mitigation, around one third of land areas are likely to experience moderate or more severe drought by 2100.: 1157 Due to global warming, droughts are more frequent and intense than in the past. Several social factors may worsen the impact of droughts. These are increased water demand, population growth and urban expansion in many areas. Land restoration techniques, such as agroforestry, can help reduce the impact of droughts. Human activity can directly trigger exacerbating factors such as over-farming, excessive irrigation, deforestation, and erosion, which adversely impact the ability of the land to capture and hold water. In arid climates, the main source of erosion is wind. Erosion can be the result of material movement by the wind. The wind can cause small particles to be lifted and moved to another region (deflation). Suspended particles within the wind may strike solid objects, causing erosion by abrasion. Wind erosion generally occurs in areas with little or no vegetation, often in areas where there is insufficient rainfall to support vegetation. Woody plant encroachment can increase soil porosity and thereby the chances of soil drought. Impacts Drought is one of the most complex and major natural hazards, and it has devastating impacts on the environment, economy, water resources, agriculture, and society worldwide. One can divide the impacts of droughts and water shortages into three groups: environmental, economic and social (including health). Environmental effects of droughts include: lower surface and subterranean water levels; lower flow levels (with a decrease below the minimum leading to direct danger for amphibian life); increased pollution of surface water; the drying out of wetlands; more and larger wildfires; higher deflation intensity; loss of biodiversity; worse health of trees; and the appearance of pests and dendroid diseases. Drought-induced tree mortality is missing from most climate models' representation of forests as a land carbon sink. Economic losses resulting from droughts include: lower agricultural, forestry, game and fishing output; higher food-production costs; lower energy-production levels in hydro plants; losses caused by depleted water tourism and transport revenue; problems with water supply for the energy sector and for technological processes in metallurgy, mining, and the chemical, paper, wood and foodstuff industries; and disruption of water supplies for municipal economies. Further examples of common environmental and economic consequences of drought include the following: Droughts can cause land degradation and loss of soil moisture, resulting in the destruction of cropland productivity. This can result in diminished crop growth or yield production and carrying capacity for livestock. Drought in combination with high levels of grazing pressure can function as the tipping point for an ecosystem, causing woody encroachment. Water stress affects plant development and quality in a variety of ways: firstly, drought can cause poor germination and impaired seedling development. 
At the same time, plant growth relies on cellular division, cell enlargement, and differentiation. Drought stress impairs mitosis and cell elongation via loss of turgor pressure, which results in poor growth. Development of leaves is also dependent upon turgor pressure, concentration of nutrients, and carbon assimilates (the products of photosynthesis), all of which are reduced by drought conditions; thus drought stress leads to a decrease in leaf size and number. Plant height, biomass, leaf size and stem girth have been shown to decrease in maize under water-limiting conditions. Crop yield is also negatively affected by drought stress; the reduction in crop yield results from a decrease in photosynthetic rate, changes in leaf development, and altered allocation of resources, all due to drought stress. Crop plants exposed to drought stress suffer from reductions in leaf water potential and transpiration rate. Water-use efficiency increases in crops such as wheat while decreasing in others, such as potatoes. Plants need water for the uptake of nutrients from the soil, and for the transport of nutrients throughout the plant: drought conditions limit these functions, leading to stunted growth. Drought stress also causes a decrease in photosynthetic activity in plants, due to the reduction of photosynthetic tissues, stomatal closure, and reduced performance of the photosynthetic machinery. This reduction in photosynthetic activity contributes to the reduction in plant growth and yields. Another factor influencing reduced plant growth and yields is the allocation of resources: following drought stress, plants allocate more resources to roots to aid in water uptake, increasing root growth and reducing the growth of other plant parts while decreasing yields. The most negative impacts of drought for humans include crop failure, food crisis, famine, malnutrition, and poverty, which lead to loss of life and mass migration of people. There are negative effects on the health of people who are directly exposed to this phenomenon (excessive heat waves). Droughts can also cause limitations of water supplies, increased water pollution levels, high food costs, stress caused by failed harvests, water scarcity, and so on. Reduced water quality can occur because lower water flows reduce dilution of pollutants and increase contamination of remaining water sources. This explains why droughts and water scarcity act as a factor that widens the gap between developed and developing countries. Effects vary according to vulnerability. For example, subsistence farmers are more likely to migrate during drought because they do not have alternative food sources. Areas with populations that depend on water sources as a major food source are more vulnerable to famine. Further examples of social and health consequences include the following: Severe drought has been noted to cause unrest and, in some cases, to precede periods of political upheaval. Wind erosion is much more severe in arid areas and during times of drought. For example, in the Great Plains, it is estimated that soil loss due to wind erosion can be as much as 6,100 times greater in drought years than in wet years. Loess is a homogeneous, typically nonstratified, porous, friable, slightly coherent, often calcareous, fine-grained, silty, pale yellow or buff, windblown (aeolian) sediment. It generally occurs as a widespread blanket deposit that covers areas of hundreds of square kilometers, tens of meters thick. Loess often stands in either steep or vertical faces. 
Loess tends to develop into highly rich soils. Under appropriate climatic conditions, areas with loess are among the most agriculturally productive in the world. Loess deposits are geologically unstable by nature, and will erode very readily. Therefore, windbreaks (such as big trees and bushes) are often planted by farmers to reduce the wind erosion of loess. Regions particularly affected In 2005, parts of the Amazon basin experienced the worst drought in 100 years. A 2006 article reported results showing that the forest in its present form could survive only three years of drought. Scientists at the Brazilian National Institute of Amazonian Research argue in the article that this drought response, coupled with the effects of deforestation on regional climate, is pushing the rainforest towards a "tipping point" where it would irreversibly start to die. The article concludes that the rainforest is on the brink of being turned into savanna or desert, with catastrophic consequences for the world's climate. According to the WWF, the combination of climate change and deforestation increases the drying effect of dead trees that fuels forest fires. The 1997–2009 Millennium Drought in Australia led to a water supply crisis across much of the country. As a result, many desalination plants were built for the first time. By far the largest part of Australia is desert or semi-arid land commonly known as the outback. A 2005 study by Australian and American researchers investigated the desertification of the interior, and suggested that one explanation was related to human settlers who arrived about 50,000 years ago. Regular burning by these settlers could have prevented monsoons from reaching interior Australia. In June 2008 it became known that an expert panel had warned of long-term, possibly irreversible, severe ecological damage for the whole Murray-Darling basin if it did not receive sufficient water by October 2008. Australia could experience more severe droughts and they could become more frequent in the future, a government-commissioned report said on July 6, 2008. Australian environmentalist Tim Flannery predicted that unless it made drastic changes, Perth in Western Australia could become the world's first ghost metropolis, an abandoned city with no more water to sustain its population. The long Australian Millennial drought broke in 2010. East Africa, including for example Ethiopia, Eritrea, Kenya, Somalia, South Sudan, Sudan, Tanzania, and Uganda, has a diverse climate, ranging from hot, dry regions to cooler, wetter highland regions. The region has considerable variability in seasonal rainfall and a very complex topography. In the northern parts of the region within the Nile basin (Ethiopia, Sudan), rainfall is characterized by a unimodal cycle with a wet season from July to September. The rest of the region has a bimodal annual cycle, featuring long rains from March to May and short rains from October to December. The frequent occurrence of hydrological extremes, like droughts and floods, harms the already vulnerable population suffering from severe poverty and economic turmoil. Droughts prompted food shortages, for example in 1984–85, 2006 and 2011. The Eastern African region experiences the impacts of climate change in different forms. 
For instance, below-average rainfall occurred for six consecutive rainy seasons in the Horn of Africa during the period 2020–2023, leading to the third-longest and most widespread drought on record, with dire implications for food security (see Horn of Africa drought (2020–present)). Conversely, other parts experienced extreme floods, e.g., the 2020 East Africa floods in Ethiopia, Rwanda, Kenya, Burundi, and Uganda, and the 2022 floods in South Sudan. A key feature in the region is the heterogeneous distribution of hydrologic extremes in space and time. For instance, El Niño can cause droughts in one part of the region and floods in the other. This is also a common situation within a country, e.g., in Ethiopia. The recent years with consecutive droughts followed by floods are a testament to the need to better forecast these kinds of events and their impacts. Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in coming decades. Drought in India affecting the Ganges is of particular concern, as it provides drinking water and agricultural irrigation for more than 500 million people. In 2025, the UN warned that retreating glaciers could threaten the food and water supply of 2 billion people worldwide. The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, would also be affected. Protection, mitigation and relief Agriculturally, people can effectively mitigate much of the impact of drought through irrigation and crop rotation. Failure to develop adequate drought mitigation strategies carries a grave human cost in the modern era, exacerbated by ever-increasing population densities. A range of strategies exist for drought protection and mitigation. When water is scarce due to droughts, there are a range of options for people to access other sources of water, such as wastewater reuse, rainwater harvesting and stormwater recovery, or seawater desalination. History Throughout history, humans have usually viewed droughts as disasters due to the impact on food availability and the rest of society. Drought is among the earliest documented climatic events, present in the Epic of Gilgamesh and tied to the Biblical story of Joseph's arrival in and the later Exodus from ancient Egypt. Hunter-gatherer migrations in 9,500 BC Chile have been linked to the phenomenon, as has the exodus of early humans out of Africa and into the rest of the world around 135,000 years ago. Droughts can be scientifically explained in terms of physical mechanisms, which underlie natural disasters and are influenced by human impact on the environment. Beliefs about drought are further shaped by cultural factors including local knowledge, perceptions, values, beliefs and religion. In some places and times, droughts have been interpreted as the work of supernatural forces. Globally, people in many societies have been more likely to explain natural events like drought, famine and disease in terms of the supernatural than they are to explain social phenomena like war, murder, and theft. Historically, rituals have been used in an attempt to prevent or avert drought. Rainmaking rituals have ranged from dances to scapegoating to human sacrifices. Many ancient practices are now a matter of folklore while others may still be practiced. 
In areas where people have limited understanding of the scientific basis of drought, beliefs about drought continue to reflect indigenous beliefs in the power of spirits and Christian philosophies that see drought as a divine punishment. Such beliefs can influence people's thinking and affect their resilience and ability to adapt to stress and respond to crises. In the case of creationism, curricula sometimes give religious explanations of natural phenomena rather than scientific ones; such teaching may explicitly deny evolution, that human agency is affecting climate, or that climate change is occurring. Well-documented historical droughts include the Dust Bowl of the 1930s in North America.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Uniform_Resource_Identifier] | [TOKENS: 2817]
Contents Uniform Resource Identifier A Uniform Resource Identifier (URI), formerly Universal Resource Identifier, is a unique sequence of characters that identifies an abstract or physical resource,: 1 such as resources on a webpage, an email address, a phone number,: 7 books, real-world objects such as people and places, and concepts.: 5 URIs which provide a means of locating and retrieving information resources on a network (whether on the Internet or on another private network, such as a computer file system or an intranet) are Uniform Resource Locators (URLs). Therefore, URLs are a subset of URIs, i.e. every URL is a URI (but not necessarily the other way around).: 7 Other URIs provide only a unique name, without a means of locating or retrieving the resource or information about it; these are Uniform Resource Names (URNs). The web technologies that use URIs are not limited to web browsers. History URIs and URLs have a shared history. In 1990, Tim Berners-Lee's proposals for hypertext implicitly introduced the idea of a URL as a short string representing a resource that is the target of a hyperlink. At the time, people referred to it as a "hypertext name" or "document name". Over the next three and a half years, as the World Wide Web's core technologies of HTML, HTTP, and web browsers developed, a need emerged to distinguish a string that provided an address for a resource from a string that merely named a resource. Although not yet formally defined, the term Uniform Resource Locator came to represent the former, and the more contentious Uniform Resource Name came to represent the latter. In July 1992, Berners-Lee's report on the Internet Engineering Task Force (IETF) "UDI (Universal Document Identifiers) BOF" mentioned URLs (as Uniform Resource Locators), URNs (originally, as Unique Resource Numbers), and the need to charter a new working group. In November 1992, the IETF "URI Working Group" met for the first time. During the debate over defining URLs and URNs, it became evident that the concepts embodied by the two terms were merely aspects of the fundamental, overarching notion of resource identification. In June 1994, the IETF published RFC 1630, Berners-Lee's first Request for Comments that acknowledged the existence of URLs and URNs. Most importantly, it defined a formal syntax for Universal Resource Identifiers (i.e. URL-like strings whose precise syntaxes and semantics depended on their schemes). It also attempted to summarize the syntaxes of URL schemes in use at the time. It acknowledged, but did not standardize, the existence of relative URLs and fragment identifiers. In December 1994, RFC 1738 formally defined relative and absolute URLs, refined the general URL syntax, defined how to resolve relative URLs to absolute form, and better enumerated the URL schemes then in use. The agreed definition and syntax of URNs had to wait until the publication of IETF RFC 2141 in May 1997. With the publication of IETF RFC 2396 in August 1998, the URI syntax became a separate specification, and most of the parts of RFCs 1630 and 1738 relating to URIs and URLs in general were revised and expanded by the IETF. The new RFC changed the meaning of the U in URI from "Universal" to "Uniform". In December 1999, RFC 2732 provided a minor update to RFC 2396, allowing URIs to accommodate IPv6 addresses. A number of shortcomings discovered in the two specifications led to a community effort, coordinated by RFC 2396 co-author Roy Fielding, that culminated in the publication of IETF RFC 3986 in January 2005. 
While obsoleting the prior standard, RFC 3986 did not render the details of existing URL schemes obsolete; RFC 1738 continues to govern such schemes except where otherwise superseded. IETF RFC 2616, for example, refines the http scheme. Simultaneously, the IETF published the content of RFC 3986 as the full standard STD 66, reflecting the establishment of the URI generic syntax as an official Internet protocol. In 2001, the World Wide Web Consortium's (W3C) Technical Architecture Group (TAG) published a guide to best practices and canonical URIs for publishing multiple versions of a given resource. For example, content might differ by language or by size to adjust for the capacity or settings of the device used to access that content. In August 2002, IETF RFC 3305 pointed out that the term "URL" had, despite widespread public use, faded into near obsolescence, and serves only as a reminder that some URIs act as addresses by having schemes implying network accessibility, regardless of any such actual use. As URI-based standards such as Resource Description Framework make evident, resource identification need not suggest the retrieval of resource representations over the Internet, nor need it imply network-based resources at all. The Semantic Web uses the HTTP URI scheme to identify both documents and concepts for practical uses, a distinction which has caused confusion as to how to distinguish the two. The TAG published an e-mail in 2005 with a solution to the problem, which became known as the httpRange-14 resolution. The W3C subsequently published an Interest Group Note titled "Cool URIs for the Semantic Web", which explained the use of content negotiation and the HTTP 303 response code for redirections in more detail. Design A Uniform Resource Name (URN) is a URI that identifies a resource by name in a particular namespace. A URN may be used to talk about a resource without implying its location or how to access it. For example, in the International Standard Book Number (ISBN) system, ISBN 0-486-27557-4 identifies a specific edition of the William Shakespeare play Romeo and Juliet. The URN for that edition would be urn:isbn:0-486-27557-4. However, it gives no information as to where to find a copy of that book. A Uniform Resource Locator (URL) is a URI that specifies the means of acting upon or obtaining the representation of a resource, i.e. specifying both its primary access mechanism and network location. For example, the URL http://example.org/wiki/Main_Page refers to a resource identified as /wiki/Main_Page, whose representation is obtainable via the Hypertext Transfer Protocol (http:) from a network host whose domain name is example.org. (In this case, HTTP usually implies it to be in the form of HTML and related code. In practice, that is not necessarily the case, as HTTP allows specifying arbitrary formats in its header.) A URN is analogous to a person's name, while a URL is analogous to their street address. In other words, a URN identifies an item and a URL provides a method for finding it. Technical publications, especially standards produced by the IETF and by the W3C, normally reflect a view outlined in a W3C Recommendation of 30 July 2001, which acknowledges the precedence of the term URI rather than endorsing any formal subdivision into URL and URN. URL is a useful but informal concept: a URL is a type of URI that identifies a resource via a representation of its primary access mechanism (e.g., its network "location"), rather than by some other attributes it may have. 
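The URL/URN distinction can be seen mechanically with a generic URI parser. A brief sketch using Python's standard urllib.parse module (one widely available implementation of the generic syntax):

    from urllib.parse import urlparse

    url = urlparse("http://example.org/wiki/Main_Page")
    print(url.scheme)  # 'http'
    print(url.netloc)  # 'example.org'  (the authority component)
    print(url.path)    # '/wiki/Main_Page'

    # The URN from the ISBN example parses too: same generic syntax,
    # but nothing in it says how to retrieve the resource.
    urn = urlparse("urn:isbn:0-486-27557-4")
    print(urn.scheme)  # 'urn'
    print(urn.path)    # 'isbn:0-486-27557-4'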
As such, a URL is simply a URI that happens to point to a resource over a network. However, in non-technical contexts and in software for the World Wide Web, the term "URL" remains widely used. Additionally, the term "web address" (which has no formal definition) often occurs in non-technical publications as a synonym for a URI that uses the http or https schemes. Such assumptions can lead to confusion, for example, in the case of XML namespaces that have a visual similarity to resolvable URIs. Specifications produced by the WHATWG prefer URL over URI, and so newer HTML5 APIs use URL over URI; as the WHATWG put it: "Standardize on the term URL. URI and IRI [Internationalized Resource Identifier] are just confusing. In practice a single algorithm is used for both so keeping them distinct is not helping anyone. URL also easily wins the search result popularity contest." While most URI schemes were originally designed to be used with a particular protocol, and often have the same name, they are semantically different from protocols. For example, the scheme http is generally used for interacting with web resources using HTTP, but the scheme file has no protocol. A URI has a scheme that refers to a specification for assigning identifiers within that scheme. As such, the URI syntax is a federated and extensible naming system wherein each scheme's specification may further restrict the syntax and semantics of identifiers using that scheme. The URI generic syntax is a superset of the syntax of all URI schemes. It was first defined in RFC 2396, published in August 1998, and finalized in RFC 3986, published in January 2005. A URI is composed from an allowed set of ASCII characters consisting of reserved characters (gen-delims: :, /, ?, #, [, ], and @; sub-delims: !, $, &, ', (, ), *, +, ,, ;, and =),: 13–14 unreserved characters (uppercase and lowercase letters, decimal digits, -, ., _, and ~),: 13–14 and the character %.: 12 Syntax components and subcomponents are separated by delimiters from the reserved characters (only from generic reserved characters for components) and define identifying data represented as unreserved characters, reserved characters that do not act as delimiters in the component and subcomponent respectively,: §2 and percent-encodings when the corresponding character is outside the allowed set or is being used as a delimiter of, or within, the component. A percent-encoding of an identifying data octet is a sequence of three characters, consisting of the character % followed by the two hexadecimal digits representing that octet's numeric value.: §2.1 The URI generic syntax consists of five components organized hierarchically in order of decreasing significance from left to right: scheme, authority, path, query, and fragment.: §3 A component is undefined if it has an associated delimiter and the delimiter does not appear in the URI; the scheme and path components are always defined.: §5.2.1 A component is empty if it has no characters; the scheme component is always non-empty.: §3 The authority component in turn consists of three subcomponents: an optional userinfo subcomponent (terminated by @), a host subcomponent, and an optional port subcomponent (preceded by :). In simplified form, the generic syntax can be written as: URI = scheme ":" ["//" authority] path ["?" query] ["#" fragment]. The scheme- or implementation-specific reserved character + may be used in the scheme, userinfo, host, path, query, and fragment, and the scheme- or implementation-specific reserved characters !, $, &, ', (, ), *, ,, ;, and = may be used in the userinfo, host, path, query, and fragment. 
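Percent-encoding as defined above is implemented by urllib.parse's quote and unquote functions, which operate on the UTF-8 encoding of the text; a short sketch:

    from urllib.parse import quote, unquote

    # A space and a non-ASCII character are encoded octet by octet:
    print(quote("hello world/café", safe="/"))  # hello%20world/caf%C3%A9
    print(unquote("caf%C3%A9"))                 # café

    # Reserved characters are percent-encoded when used as data
    # rather than as delimiters:
    print(quote("a?b=c", safe=""))              # a%3Fb%3Dc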
Additionally, the generic reserved character : may be used in the userinfo, path, query and fragment; the generic reserved characters @ and / may be used in the path, query and fragment; and the generic reserved character ? may be used in the query and fragment.: §A As an example of a URI and its component parts: in foo://example.com:8042/over/there?name=ferret#nose, the scheme is foo, the authority is example.com:8042, the path is /over/there, the query is name=ferret, and the fragment is nose. DOIs (digital object identifiers) fit within the Handle System and within the URI system, as facilitated by appropriate syntax. A URI reference is either a URI or a relative reference when it does not begin with a scheme component followed by a colon (:).: §4.1 A path segment that contains a colon character (e.g., foo:bar) cannot be used as the first path segment of a relative reference if its path component does not begin with a slash (/), as it would be mistaken for a scheme component. Such a path segment must be preceded by a dot path segment (e.g., ./foo:bar).: §4.2 Web document markup languages frequently use URI references to point to other resources, such as external documents or specific portions of the same logical document.: §4.4 Resolving a URI reference against a base URI results in a target URI. This implies that the base URI exists and is an absolute URI (a URI with no fragment component). The base URI can be obtained, in order of precedence, from: a base URI embedded in the content of the representation; the base URI of an encapsulating entity; the URI used to retrieve the representation; or a default defined by the application.: §5.1 Within a representation with a well-defined base URI, a relative reference is resolved to its target URI following the algorithm of the specification.: §5.4 URL munging is a technique by which a command is appended to a URL, usually at the end, after a "?" token. It is commonly used in WebDAV as a mechanism for adding functionality to HTTP. In a versioning system, for example, to add a "checkout" command to a URL, it is written as http://editing.com/resource/file.php?command=checkout. This has the advantage of being easy for CGI parsers to handle, and the appended command acts as an intermediary between HTTP and the underlying resource. In XML, a namespace is an abstract domain to which a collection of element and attribute names can be assigned. The namespace name is a character string which must adhere to the generic URI syntax. However, the name is generally not considered to be a URI, because the URI specification bases the decision not only on lexical components, but also on their intended use. A namespace name does not necessarily imply any of the semantics of URI schemes; for example, a namespace name beginning with http: may have no connotation to the use of HTTP. Originally, the namespace name could match the syntax of any non-empty URI reference, but the use of relative URI references was deprecated by the W3C. A separate W3C specification for namespaces in XML 1.1 permits Internationalized Resource Identifier (IRI) references to serve as the basis for namespace names in addition to URI references.
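The relative-reference resolution described above (RFC 3986, section 5) is implemented by urllib.parse.urljoin; a few cases, including the dot-segment rule for path segments containing a colon:

    from urllib.parse import urljoin

    base = "http://example.org/a/b/c?query#frag"

    print(urljoin(base, "d"))          # http://example.org/a/b/d
    print(urljoin(base, "../x"))       # http://example.org/a/x
    print(urljoin(base, "/y"))         # http://example.org/y
    print(urljoin(base, "?k=v"))       # http://example.org/a/b/c?k=v
    print(urljoin(base, "./foo:bar"))  # http://example.org/a/b/foo:bar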
========================================
[SOURCE: https://en.wikipedia.org/wiki/Turing_(programming_language)] | [TOKENS: 928]
Turing (programming language)

Turing is a high-level, general-purpose programming language developed in 1982 by Ric Holt and James Cordy at the University of Toronto in Ontario, Canada. It was designed to help students taking their first computer science course learn how to code. Turing is a descendant of Pascal, Euclid, and SP/k that features a clean syntax and precise machine-independent semantics. Turing 4.1.0 is the latest stable version. Versions 4.1.1 and 4.1.2 do not emit stand-alone .exe files. Versions before 4.1.0 have outdated syntax and functions.

Overview

Named after British computer scientist Alan Turing, Turing is used mainly as a teaching language at the high school and university level. Two other versions exist, Object-Oriented Turing and Turing+, a systems programming variant. In September 2001, "Object Oriented Turing" was renamed "Turing" and the original Turing was renamed "Classic Turing". Turing is now unsupported by Holt Software Associates in Toronto, Ontario. Turing was widely used in high schools in Ontario as an introduction to programming. On November 28, 2007, Turing, which was previously a commercial programming language, became freeware, available to download from the developer's website free of charge for personal, commercial, and educational use. The makers of Turing, Holt Software Associates, have since ceased operations, and Turing has seen no further development since November 25, 2007.

Syntax

Turing is designed to have a very lightweight, readable, intuitive syntax. The entire "Hello, World!" program in Turing is a single line (a reconstruction appears at the end of this entry). Turing avoids semicolons and braces, using explicit end markers for most language constructs instead, and allows declarations anywhere. The classic illustration is a complete program defining and using the traditional recursive function to calculate a factorial, also reconstructed below.

Open implementations

Currently, there are two open-source alternative implementations of Turing: Open Turing, an open-source version of the original interpreter, and TPlus, a native compiler for the concurrent systems programming language variant Turing+. OpenT, a project to develop a Turing compiler, was discontinued. Open Turing is an open-source implementation of the original Turing interpreter for Windows written by Tristan Hume. It includes speed improvements, new features such as OpenGL 3D, and a new code editor. It is fully backwards compatible with the closed-source implementation. TPlus is an open-source implementation of original (non-Object-Oriented) Turing with systems programming extensions, developed at the University of Toronto and ported to Linux, Solaris, and Mac OS X at Queen's University in the late 1990s. TPlus implements Turing+ (Turing Plus), the concurrent systems programming language described below.

Turing+

Turing+ (Turing Plus) is a concurrent systems programming language based on the Turing programming language, designed by James Cordy and Ric Holt, then at the University of Toronto, Canada, in 1987. Some, but not all, of the features of Turing+ were eventually subsumed into Object-Oriented Turing.
Turing+ extended original Turing with processes and monitors (as specified by C. A. R. Hoare) as well as language constructs needed for systems programming such as binary input-output, separate compilation, variables at absolute addresses, type converters, and other features. Turing+ was explicitly designed to replace Concurrent Euclid in systems-programming applications. The TUNIS operating system, originally written in Concurrent Euclid, was recoded in Turing+ in its MiniTunis implementation. Turing+ has been used to implement several production software systems, including the language TXL.

Object-Oriented Turing

Object-Oriented Turing is an extension of the Turing programming language and a replacement for Turing Plus, created by Ric Holt of the University of Toronto, Canada, in 1991. It is imperative, object-oriented, and concurrent. It has modules, classes, single inheritance, processes, exception handling, and optional machine-dependent programming. There is an integrated development environment under the X Window System and a demo version.[citation needed] Versions exist for Sun-4, MIPS, RS-6000, NeXTSTEP, Windows 95, and others.
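For reference, the single-line "Hello, World!" program and a minimal recursive factorial program referred to above look like the following in Turing (a sketch of the standard examples, not verbatim listings):

```turing
put "Hello, World!"
```

```turing
% Recursive factorial: Turing uses explicit end markers instead of braces.
function factorial (n : int) : real
    if n = 0 then
        result 1
    else
        result n * factorial (n - 1)
    end if
end factorial

var n : int
put "Please input an integer: " ..   % trailing .. suppresses the newline
get n
put "The factorial of ", n, " is ", factorial (n)
```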
========================================
[SOURCE: https://en.wikipedia.org/wiki/How_the_Other_Half_Lives] | [TOKENS: 2356]
How the Other Half Lives

How the Other Half Lives: Studies among the Tenements of New York (1890) is an early publication of photojournalism by Jacob Riis, documenting squalid living conditions in New York City slums in the 1880s. The photographs served as a basis for future "muckraking" journalism by exposing the slums to New York City's upper and middle classes. They inspired many reforms of working-class housing, both immediately after publication and through their lasting impact on society today.

Background

In the 1880s many people in upper- and middle-class society were unaware of the dangerous conditions in the slums among poor immigrants. After the Civil War, the country transformed into an industrial superpower and became largely urban. Simultaneously, a wave of unskilled southern European, eastern European, and Asian immigrants came to settle in the "promised land" of the United States. This migration was vastly different from the previous booms due to the influx of non-western-European and non-Protestant individuals, which made the split between the "new" and "old" immigrants much larger. In the 1880s, over 5.2 million immigrants came to the United States, with many of these people staying in New York City. This increased New York City's population by 25%, making the tenement problem much more extreme. In the years after the Civil War, many of the former residents of the most notorious slums had become wealthy enough to move out of these conditions, or had died in the war. Also, the arrival of the elevated railway in the Bowery in 1889 transformed this evolving neighborhood back into the squalid, seedy neighborhood it had been before the war, and even made it worse.

The slums were viewed as a problem before the publication of How the Other Half Lives. Some political reformers believed that a wider distribution of wealth would fix the problem, while the Socialists believed that public ownership and a redistribution of wealth would. National organizations such as the American Red Cross, the National Conference of Charities and Corrections, and the Women's Christian Temperance Union were involved in different aspects of the slum problem, but these efforts were local in scope instead of national. By 1865, a total of 15,309 tenements existed in New York City, and the city's population was approaching 1,000,000. The tenement was colloquially known as a housing type for the poor, built for maximum density within the constraints of a 25-by-100-foot (8-by-30-metre) lot. When the Tenement House Act of 1867 was passed, the tenement was defined as:

Any house, building, or portion thereof, which is rented, leased, let or hired out to be occupied or is occupied, as the home or residence of more than three families living independently of one another and doing their own cooking upon the premises, or by more than two families upon a floor, so living and cooking and having a common right in the halls, stairways, yards, water-closets, or privies, or some of them.

It was with the publication of this act that the basic "dumbbell" layout of the tenement was first used. This tenement style was supposed to allow more natural light and air ventilation into the living quarters, as well as adding more water closets and meeting the fire safety regulations set out in the Tenement House Act of 1867. On top of this, many of the landlords of these tenements did little to improve their conditions.
When asked about the enforcement of the new statutes, the Superintendent of Buildings said that he was satisfied with hard wood in these tenements because it "burned slowly."

Jacob Riis

Jacob Riis emigrated from Denmark in 1870 to New York City, eager to prove himself. Finding it difficult to find work, he found a home in the slums of New York's Lower East Side. He went back to Denmark for a short time, returning to New York to become a police reporter. During this time, Riis became a devout Christian and devoted himself to "the service of God and his fellows." Riis turned to photography as a sort of "pastime" and found it a useful tool when writing his police reports. Once he began using magnesium flash powder, he could capture the dark and dingy conditions of the tenements. How the Other Half Lives was only one book in Riis' bibliography highlighting the conditions in the slums of New York. Other works of his that offered more in-depth views of slum life were The Children of the Poor, Children of the Tenements, The Battle with the Slums, and Out of Mulberry Street.

Summary

In January 1888, Riis bought a detective camera and went on an expedition to gather images of what life was like in the slums of New York City. These included both photographs Riis took himself and images by other photographers. Finally, on January 28, 1888, Riis presented "The Other Half: How It Lives and Dies in New York," showing his images on a projection screen and taking the viewer on a journey by describing them. Throughout 1888, Riis continued his lectures in local New York City churches, which were reviewed in several newspapers including the New York Sun, Brooklyn Times, New York Evening Post, and Harper's. In February 1889, Riis published a magazine article based on his lectures in Scribner's Magazine; it was a resounding success. The book version of Riis' work was published in January 1890 as How the Other Half Lives: Studies among the Tenements of New York. The title of the book is a reference to a sentence by French writer François Rabelais, who wrote in Pantagruel: "one half of the world does not know how the other half lives" ("la moitié du monde ne sait comment l'autre vit").

How the Other Half Lives: Studies among the Tenements of New York described the living conditions in New York slums as well as the sweatshops in some tenements, which paid workers only a few cents per day. The book explains the plight of working children, who would work in factories and at other jobs; some children became garment workers and newsies (newsboys). Riis describes a system of tenement housing that had failed, he claims, because of greed and neglect from wealthier people. He claims a correlation between the high crime rate, drunkenness, and reckless behavior of the poor and their lack of a proper home. Chapter by chapter he uses his words and photographs to expose the conditions inhabited by the poor in a manner that "spoke directly to people's hearts." While Riis treats many of the ethnic groups he encounters with slurs and numerous stereotypes, he maintains his general hypothesis that the poverty in these communities was caused by the conditions surrounding them. Riis does often apply an "ethnic hierarchy," at its most extreme toward the Chinese. Riis ends How the Other Half Lives with a plan of how to fix the problem.
He asserts that the plan is achievable and that the upper classes will not only profit financially from such ventures, but have a moral obligation to tend to them as well. How the Other Half Lives follows a general outline common to the charity writings of the nineteenth century: a section on crime, the Protestant virtues and vices (intemperance, idleness, disorder, uncleanliness), miserable conditions of living, disease, the loss of modesty (especially among women), the dissolution of the family, the institutions that would help in the poor's uplift, and future sources of reform. The difference was Riis' introduction of photography to prove the squalid conditions and to increase sympathy for the individuals living in these slums. Riis finally convinced the average reader of newspapers that the poor were not poor by choice; that the dangerous and unhygienic conditions in which they lived were imposed by society, rather than the result of loose moral standards; and that the slums were something that needed to be fixed rather than gaped at or shunned.

Critical reception

The article proved to be popular, and Riis spent the better part of a year expanding it into the book published by Scribner's Books in 1890, which included more illustrations and halftones than the magazine articles could offer. The book was successful. Soon after its publication, The New York Times lauded its content, calling it a "powerful book". The praise for How the Other Half Lives continued in many other newspapers across the country. Many of these enthusiastic reviews appeared in Christian newsletters, which appreciated Riis' view on the moral issues of poverty. The Christian Intelligencer reviewed the first edition, saying "Books like this that lift the curtains and expose to public gaze the great evils of the system will hasten the day of reform." Many of these Christian reviewers found Riis' work to apply to their own cities, and called for reforms similar to those Riis outlines in How the Other Half Lives.

One of the most famous admirers of Riis' work was Theodore Roosevelt. Roosevelt became close to Riis during the former's two years as President of the Police Board. Roosevelt and Riis worked together to abolish police lodging houses (areas in police station houses that provided temporary shelter for homeless or destitute individuals and that were prone to crime and public health crises, such as outbreaks of typhus), as well as on the reenactment of the Civil Service Law, the Tenement House Commission, laws that increased the number of factory inspectors, making the eight-hour and prevailing-rate-of-wages law effective, regulation of the working hours of women and children, and multiple other labor reforms.

Because of the awakening caused by Riis' efforts, many reforms were quickly compiled to improve conditions for the working poor. In 1894, the Tenement House Committee was established. In 1895, it published the New York Tenement House Act, which outlawed rear tenements (squalid dwellings built into the backyards of buildings that did not occupy entire lots, providing even worse conditions than the rooms facing the streets) and was also the first official document to supplement a written description of tenement housing with photographs.
In addition to this legislation, more reform was brought about by the New York Tenement House Act of 1901, which changed the minimum requirements of tenement housing to include reforms in the amount of light received by living quarters, increased fire safety regulations, more ventilation, restrictions on building height, and increased room space. Riis has been given almost full credit for inspiring these acts, especially the Act of 1901. Because of his poignant descriptions of the degradation of Mulberry Bend, the area was razed and replaced with a city park. How the Other Half Lives also inspired reform on a national scale: the Department of Labor published The Housing of the Working People in 1895, the second major tenement study of the decade.

Jacob Riis was only the first to use photographs to expose the conditions in which the impoverished lived. Later works that detailed life in poverty include the images of migrants during the Dust Bowl taken by Dorothea Lange for the Farm Security Administration; The Other America by Michael Harrington; and The American Way of Poverty: How the Other Half Still Lives by Sasha Abramsky.
========================================
[SOURCE: https://en.wikipedia.org/wiki/TUTOR] | [TOKENS: 3716]
TUTOR

TUTOR, also known as PLATO Author Language, is a programming language developed for use on the PLATO system at the University of Illinois at Urbana-Champaign beginning in roughly 1965. TUTOR was initially designed by Paul Tenczar for use in computer-assisted instruction (CAI) and computer-managed instruction (CMI) (in computer programs called "lessons") and has many features for that purpose. For example, TUTOR has powerful answer-parsing and answer-judging commands, graphics, and features to simplify handling student records and statistics by instructors. TUTOR's flexibility, in combination with PLATO's computational power (running on what was considered a supercomputer in 1972), also made it suitable for the creation of games, including flight simulators, war games, dungeon-style multiplayer role-playing games, card games, word games, and medical lesson games such as Bugs and Drugs (BND). TUTOR lives on today as the programming language for the Cyber1 PLATO System, which runs most of the source code from 1980s PLATO and has roughly 5000 users as of June 2020.

Origins and development

TUTOR was originally developed as a special-purpose authoring language for designing instructional lessons, and its evolution into a general-purpose programming language was unplanned. The name TUTOR was first applied to the authoring language of the PLATO system in the later days of PLATO III. The first documentation of the language under this name appears to have been Avner, Richard Allen; Tenczar, Paul (January 1969), The TUTOR Manual, CERL Report X-4. The article "Teaching the Translation of Russian by Computer" gives a snapshot of TUTOR from shortly before PLATO IV was operational. Core elements of the language were present, but commands were given in upper case, and instead of using a general mechanism, support for alternative character sets was through special command names such as WRUSS for "write using the Russian character set."

Through the 1970s, the developers of TUTOR took advantage of the fact that the entire corpus of TUTOR programs was stored on-line on the same computer system. Whenever they felt a need to change the language, they ran conversion software over the corpus of TUTOR code to revise all existing code so that it conformed with the changes they had made. As a result, once new versions of TUTOR were developed, maintaining compatibility with the PLATO version could be very difficult. By 1981, Control Data Corporation (CDC) had largely expunged the name TUTOR from their PLATO documentation. They referred to the language itself as the PLATO Author Language. The phrase TUTOR file, or even TUTOR lesson file, survived, however, as the name of the type of file used to store text written in the PLATO Author Language.

Structure of a TUTOR lesson

A TUTOR lesson consists of a sequence of units, where each unit begins with the presentation of information and progress from one unit to the next is contingent on correctly answering one or more questions. As with COBOL paragraphs, control may enter a TUTOR unit from the preceding unit and exit into the next, but units are also callable as subroutines using the do or join commands. A typical unit presents text at given screen coordinates and then prompts the student for a response with an arrow command. What may not be apparent at first glance is the control structure implicit in such a unit: the arrow command marks the entrance to a judging block, and this control structure is one of TUTOR's unique features.

Unique features

TUTOR contained a number of unique features.
The following list is not intended as a substitute for a TUTOR manual, but merely highlights the most interesting, innovative, and sometimes confusing features of the language.

A judging block in TUTOR is a control structure that begins with an arrow command and ends with the next arrow, endarrow or unit command. The arrow command also prompts for input, with the special arrow character (resembling "▷") displayed as a prompt at the indicated screen coordinates. In effect, a judging block can be thought of as a backtracking control structure where the student may make multiple attempts to answer a question until a correct answer allows forward progress. Each judging block consists of a sequence of pattern-matching commands, each of which introduces a (possibly empty) block of commands to be executed if that pattern matches. The two most common pattern-matching commands were answer and wrong. These had identical pattern-matching semantics, except that answer judged a student response to be correct if it matched, while wrong judged a student response to be incorrect. The tag fields on the answer and wrong commands consisted of lists of optional, required and alternative words. Consider, for example, a pattern for the answer "right triangle" that lists words such as "it", "is", and "a" as ignorable and gives alternative forms such as "rt" for "right". Such a pattern would match answers such as "it is a right triangle" or "it's a triangular figure" or just "rt triangle". It would not match "sort of triangular", because the words "sort of" are not listed as ignored, and it would not match "triangle, right?", because the order is wrong. The pattern-matching subsystem recognized spelling errors, so the words "triangel" or "triangl" would match the example pattern. The lesson author could use the specs command to set how pedantic the system was about spelling errors.

The pattern-matching algorithms used by various TUTOR implementations varied in detail, but typically, each word in the input text and each word in the pattern were converted to bit vectors. To see whether a word of student input matched a word of the pattern, the Hamming distance between the two bit vectors was used as a measure of the degree of difference between the words. Bit vectors were typically 60 or 64 bits long, with fields for letter presence, letter-pair presence, and the first letter. As a result, the number of one bits in the exclusive or of two such bit vectors approximated the extent of the phonetic difference between the corresponding words. (A sketch of this scheme appears below.)

All early presentations of the control structure of a TUTOR judging block were confusing. In modern terms, however, a judging block can be described as an iterative control structure that exits when the student input is judged correct. The body of this control structure consists of a series of cases, each introduced by a pattern-matching command such as answer or wrong. All output produced by the body of the judging loop in the previous cycle is erased from the screen prior to the next cycle. Consider, for example, a judging block in which a wrong command matches the word "square" (with an optional "a"), followed by commands that write a message at line 15, column 1. In the event that the student inputs "square" or "a square", the answer is judged to be incorrect, and the text "A square has four sides." is output starting at line 15, column 1 on the screen. This output remains on the screen until the student begins to enter a new answer, at which point it is erased so that the response to the new answer can be computed. The mechanism by which the display screen rolls back to its previous state varies from implementation to implementation. Early implementations operated by switching the terminal into erase mode and re-executing the entire case that had matched.
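A minimal Python sketch of the bit-vector comparison just described, assuming an illustrative field layout (26 bits of letter presence, 28 bits of hashed letter pairs, and a first-letter field; real field widths varied by implementation):

```python
def word_vector(word: str) -> int:
    word = word.lower()
    vec = 0
    for ch in word:
        if ch.isalpha():
            vec |= 1 << (ord(ch) - ord('a'))            # letter presence, bits 0-25
    for a, b in zip(word, word[1:]):
        vec |= 1 << (26 + (ord(a) * 31 + ord(b)) % 28)  # hashed letter pairs, bits 26-53
    vec |= (ord(word[0]) - ord('a')) << 54               # first-letter field
    return vec

def difference(w1: str, w2: str) -> int:
    # Hamming distance: the number of one bits in the exclusive or.
    return bin(word_vector(w1) ^ word_vector(w2)).count("1")

print(difference("triangle", "triangel"))  # small: judged a likely misspelling
print(difference("triangle", "square"))    # large: judged a different word
```

A lesson's specs command would then correspond to choosing the distance threshold below which two words are considered the same.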
Some later implementations buffered the output produced during judging so that this output could be erased.

The join command was a unique form of subroutine call. It was defined as being equivalent to textual substitution of the body of the joined unit in place of the join command itself. As such, a joined unit could contain part of a judging block. Thus, while the judging block is conceptually an iterator enclosing a series of cases, this block may be arbitrarily broken into subroutines. (An alternative subroutine call, the do command, conformed to the usual semantics associated with subroutine calls in other programming languages.)

The PLATO IV student terminal had a 512-by-512-pixel plasma display panel, with hardware support for point plotting, line drawing, and text display. Each pixel on the PLATO IV terminal was either orange or black. The CDC PLATO V terminal used a monochrome black-and-white CRT to emulate the plasma panel. The built-in character set had 4 sets of 63 characters, each 8 by 16 pixels; half of these were fixed, half were programmable. The Tutor language provided complete support for this terminal. There were two coordinate systems: coarse coordinates, which named a line and character column on the text grid, and fine coordinates, which named a pixel position on the 512-by-512 panel.

In Tutor's drawing commands, semicolons separate successive coordinates on the draw command, which allows unambiguous use of comma-separated fine coordinates. Normally, the draw command connects consecutive points with line segments, but by putting skip in the tag, the draw command could be made to conceptually lift its pen. The tags on the circle command give the radius and fine coordinates of the center. Additional tags could specify starting and ending angles for partial circles. Hand-composing draw commands is difficult, so a picture editor was included in the PLATO system by 1974 to automate this work. It could only deal with drawing commands with constant coordinates.

Among Tutor's text-rendering tools, text rendered in size zero rotation zero used the built-in character-rendering hardware of the PLATO terminal, while rendering with nonzero size and rotation was done with line segments, and therefore significantly more slowly, because of the speed of the communication link to the terminal.

Aside from its unique answer-judging mechanisms, TUTOR's original set of control structures was rather sparse. In the mid-1970s, this shortcoming was addressed by introducing if, endif blocks with optional elseif and else sections. The semantics of these control structures was routine, but the syntax inherited the mandatory indentation of the Tutor language, presaging that of Python, and added a unique nonblank indent character to distinguish indenting from continuation lines. (The assignment arrow used in calc statements, which resembles <= but is a single character, had a dedicated key on the PLATO IV keyboard.) The same syntax was used for loop, endloop blocks, with semantics comparable to while loops in conventional programming languages. The reloop and outloop commands are somewhat analogous to the continue and break statements of languages based on C, except that they must sit at the indenting level of the loop they modify, and they have a condition tag that indicates when the indicated control transfer is to take place.
This makes the construct more powerful than in other languages, because any line of the inner loop could terminate or reloop several outer loops with one statement.

TUTOR's expression syntax did not look back to the syntax of FORTRAN, nor was it limited by the poorly designed character sets of the era. For example, the PLATO IV character set included control characters for subscript and superscript, and TUTOR used these for exponentiation, so a power such as 2³ could be written directly with a superscript. The character set also included the conventional symbols for multiplication and division, × and ÷, but in a more radical departure from the conventions established by FORTRAN, it allowed implicit multiplication, so the expressions (4+7)(3+6) and 3.4+5(2³-3)/2 were valid, with the values 99 and 15.9, respectively. This feature was seen as essential. When students typed in a numeric answer to a question, they could use operators and variables and standard algebraic notation, and the program would use the TUTOR compute command to compile and run the formula and check that it was numerically equivalent (or within the floating-point roundoff error) to the correct answer. The language included a pre-defined constant named with the Greek letter pi (π), with the appropriate value, which could be used in calculations. Thus, the expression πr² could be used to calculate the area of a circle, using the built-in π constant, implicit multiplication, and exponentiation indicated by a superscript.

In TUTOR, the floating-point comparison x=y was defined as being true if x and y were approximately equal. This simplified life for mathematically naïve developers of instructional lessons, but it occasionally caused headaches for developers of numerically sophisticated code, because it was possible that both x<y and x≥y could be true at the same time (see the sketch below).

As an authoring language, TUTOR began with only minimal memory resources and only the crudest tools for manipulating them. Each user process had a private data segment of 150 variables, and shared common blocks could be attached, allowing inter-user communication through shared memory. On the PLATO IV system, words were 60 bits, in keeping with the CDC 6600 family of computers. Some later implementations changed this to 64 bits. The private memory region of each process consisted of 150 words, referred to as student variables; the values of these variables were persistent, following the individual user from session to session. These were addressed as n1 through n150 when used to hold integer values, or as v1 through v150 when used to hold floating-point values. A TUTOR lesson could attach a single region of up to 1500 words of shared memory using the common command. Each lesson could have an unnamed temporary common block containing variables shared by all users of that lesson. Such blocks were created when a lesson came into use and deallocated when the lesson became inactive. In contrast, named common blocks were associated with a block of a lesson (a disk file). Shared memory was addressed as nc1 through nc1500 (for integers) or vc1 through vc1500 (for floating-point numbers). Where 150 student variables was insufficient, a lesson could use the storage command to create an additional private memory segment of up to 1000 words. This segment existed in swap space only, but it could be mapped to student variables or common variables: a lesson could, for example, define nc1 to nc1000 as a shared unnamed common block while mapping nc1001 to nc1075 to private storage.
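A minimal Python sketch of the tolerant comparison and the anomaly it permits (the tolerance value here is illustrative; TUTOR's actual criterion is not specified in the text):

```python
TOL = 1e-9  # illustrative tolerance for "approximately equal"

def tutor_eq(x: float, y: float) -> bool:
    # TUTOR-style equality: true when x and y are approximately equal.
    return abs(x - y) <= TOL

x, y = 1.0, 1.0 + 5e-10    # distinct values closer together than TOL
print(x < y)               # True: the exact comparison says x is less than y
print(tutor_eq(x, y))      # True: tolerant equality, so "x >= y" is also judged true
# Hence x < y and x >= y can both hold at once, breaking trichotomy.
```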
The Tutor define command was very similar to the C #define preprocessor directive, and it was the only way to associate mnemonic names with variables. It was up to the programmer to statically allocate memory and assign names to variables: a define set named, say, mynames could bind mnemonic names to three of the floating-point student variables. Users were advised that "there should not be any v3's or v26's anywhere in your lesson except in the define statement itself. Put all your definitions at the very beginning of the lesson where you will have ready reference to which variables you are using." Functions could be defined with macro-substitution semantics, for example a definition along the lines of cotan(a) = cos(a)/sin(a). Unlike C, the original scope rules of TUTOR were pure "definition before use" with no provisions for local definitions. Thus, a formal parameter such as the a in that definition must not have any previous definition. Later in the development of TUTOR, with the introduction of multiple named sets of definitions, the programmer was given explicit control over which sets of definitions were currently in force. For example, define purge, setname would discard all definitions in the named set.

The original TUTOR tools for text manipulation were based on commands for specific text operations, for example, pack to place a packed character string into consecutive variables in memory, search to search for one string within another, and move to move a string from memory to memory. By 1975, more general tools for arrays of integers and packed arrays were added. Segmented arrays, defined with the keyword segment, were comparable to packed arrays in Pascal. The byte size, and whether the array elements were to be treated as signed or unsigned, were entirely under user control. Arbitrary text manipulation could be done by setting the byte size to the machine byte size: 6 bits on implementations using display code, 8 bits on some later ASCII and extended-ASCII implementations. Notably, segmented arrays had no specification of array dimensionality. (A sketch of this packing scheme appears at the end of this entry.)

A general parameter-passing mechanism was added to TUTOR early in the PLATO IV era. The formal parameters listed in the argument list of a unit command are simply the defined names for statically allocated global variables. The semantics of parameter passing was given as being equivalent to assignment at the time of the control transfer to the destination unit, and if actual parameters were omitted from a do command, the effect was to leave the prior values of the corresponding formal parameters unchanged.

Local variables were added to TUTOR some time around 1980. Lesson authors wishing to use local variables were required to use the lvars command to declare the size of the buffer used for local variables, up to 128 words. Having done so, a unit using local variables could begin with continuation lines of the unit command, which were taken to be lines of an implicit define command with local scope. Conventional definitions in terms of student variables such as n150 could be used in such a local define, but other forms automatically bound names to locations in the block of memory allocated by the lvars command. The available TUTOR documentation does not discuss how local variables are allocated.

Other implementations

There has been a sizable family of TUTOR-related languages, each similar to the original TUTOR language but with differences.
In particular, TUTOR was a component of a system (the PLATO computer-based education system) that ran on particular CDC mainframe hardware. For efficiency, there were some hardware-specific elements in TUTOR (e.g. variables that were 60-bit words, usable as arrays of 60 bits or as 10 six-bit characters). Also, TUTOR was designed before the advent of the windows-oriented graphical user interface (GUI).

The microTutor language was developed in the PLATO project at UIUC to permit portions of a lesson to run in terminals that contained microcomputers, with connections to TUTOR code running on the mainframe. The microTutor dialect was also the programming language of the Cluster system developed at UIUC and licensed to TDK in Japan; the Cluster system consisted of a small group of terminals attached to a minicomputer which provided storage and compilation. The Tencore Language Authoring System is a TUTOR derivative developed by Paul Tenczar for PCs and sold by Computer Teaching Corporation. cT was a derivative of TUTOR and microTutor developed at Carnegie Mellon which allowed programs to run without change in windowed GUI environments on Windows, Mac, and Unix/Linux systems. The Pterm terminal emulator developed by Cyber1 supports the microTutor language starting with version 6.
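A minimal Python sketch of the segmented-array packing described earlier, assuming 60-bit words and a user-chosen field width (the helper names are hypothetical, not TUTOR or PLATO APIs):

```python
WORD_BITS = 60  # PLATO IV word size; later implementations used 64

def seg_store(words: list[int], index: int, value: int, size: int) -> None:
    # Store an unsigned `size`-bit element at `index` in a packed word array.
    per_word = WORD_BITS // size
    w, slot = divmod(index, per_word)
    shift = slot * size
    mask = (1 << size) - 1
    words[w] = (words[w] & ~(mask << shift)) | ((value & mask) << shift)

def seg_fetch(words: list[int], index: int, size: int) -> int:
    # Fetch the unsigned `size`-bit element at `index`.
    per_word = WORD_BITS // size
    w, slot = divmod(index, per_word)
    return (words[w] >> (slot * size)) & ((1 << size) - 1)

words = [0] * 2                 # two 60-bit words hold twenty 6-bit elements
seg_store(words, 13, 42, 6)     # e.g. a display-code character
print(seg_fetch(words, 13, 6))  # 42
```

Setting size to 6 corresponds to display-code text handling, and 8 to the later ASCII implementations, as described above.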
========================================
[SOURCE: https://en.wikipedia.org/wiki/Earth%27s_crust] | [TOKENS: 1246]
Earth's crust

Earth's crust is its thick outer shell of rock, comprising less than one percent of the planet's radius and volume. It is the top component of the lithosphere, a solidified division of Earth's layers that includes the crust and the upper part of the mantle. The lithosphere is broken into tectonic plates whose motion allows heat to escape the interior of Earth into space. The crust lies on top of the mantle, a configuration that is stable because the upper mantle is made of peridotite and is therefore significantly denser than the crust. The boundary between the crust and mantle is conventionally placed at the Mohorovičić discontinuity, a boundary defined by a contrast in seismic velocity. The temperature of the crust increases with depth, reaching values typically in the range from about 700 to 1,600 °C (1,292 to 2,912 °F) at the boundary with the underlying mantle. Locally, in the upper part of the crust, the temperature increases by as much as 30 °C (54 °F) for every kilometer of depth.

Composition

The crust of Earth is of two distinct types: continental crust and oceanic crust. The average thickness of the crust is about 15–20 km (9.3–12.4 mi). Because both the continental and oceanic crust are less dense than the mantle below, both types of crust "float" on the mantle. The surface of the continental crust is significantly higher than the surface of the oceanic crust, due to the greater buoyancy of the thicker, less dense continental crust (an example of isostasy; a worked example appears at the end of this entry). As a result, the continents form high ground surrounded by deep ocean basins.

The continental crust has an average composition similar to that of andesite, though the composition is not uniform, with the upper crust averaging a more felsic composition similar to that of dacite, while the lower crust averages a more mafic composition resembling basalt. The most abundant minerals in Earth's continental crust are feldspars, which make up about 41% of the crust by mass, followed by quartz at 12% and pyroxenes at 11%. All the other constituents except water occur only in very small quantities and total less than 1%. Continental crust is enriched in incompatible elements compared to the basaltic ocean crust, and much enriched compared to the underlying mantle. The most incompatible elements are enriched by a factor of 50 to 100 in the continental crust relative to primitive mantle rock, while oceanic crust is enriched with incompatible elements by a factor of about 10. The estimated average density of the continental crust is 2.835 g/cm3, with density increasing with depth from an average of 2.66 g/cm3 in the uppermost crust to 3.1 g/cm3 at the base of the crust. In contrast to the continental crust, the oceanic crust is composed predominantly of pillow lava and sheeted dikes with the composition of mid-ocean ridge basalt, with a thin upper layer of sediments and a lower layer of gabbro.

Formation and evolution

Earth formed approximately 4.6 billion years ago from a disk of dust and gas orbiting the newly formed Sun. It formed via accretion, where planetesimals and other smaller rocky bodies collided and stuck, gradually growing into a planet. This process generated an enormous amount of heat, which caused early Earth to melt completely. As planetary accretion slowed, Earth began to cool, forming its first crust, called a primary or primordial crust. This crust was likely repeatedly destroyed by large impacts, then reformed from the magma ocean left by the impact.
None of Earth's primary crust has survived to today; all of it was destroyed by erosion, impacts, and plate tectonics over the past several billion years. Since then, Earth has been forming a secondary and tertiary crust, which correspond to oceanic and continental crust, respectively. Secondary crust forms at mid-ocean spreading centers, where partial melting of the underlying mantle yields basaltic magmas and new ocean crust forms. This "ridge push" is one of the driving forces of plate tectonics, and it is constantly creating new ocean crust. Consequently, old crust must be destroyed, so opposite a spreading center there is usually a subduction zone: a trench where an ocean plate is sinking back into the mantle. This constant process of creating new ocean crust and destroying old ocean crust means that the oldest ocean crust on Earth today is only about 200 million years old.

In contrast, the bulk of the continental crust is much older. The oldest continental crustal rocks on Earth have ages in the range from about 3.7 to 4.28 billion years and have been found in the Narryer Gneiss terrane in Western Australia, in the Acasta Gneiss in the Northwest Territories on the Canadian Shield, and in other cratonic regions such as those on the Fennoscandian Shield. Some zircons with ages as great as 4.3 billion years have been found in the Narryer Gneiss terrane. Continental crust is a tertiary crust, formed at subduction zones through recycling of subducted secondary (oceanic) crust. The average age of Earth's current continental crust has been estimated to be about 2.0 billion years. Most crustal rocks formed before 2.5 billion years ago are located in cratons. Such old continental crust and the underlying lithospheric mantle are less dense than elsewhere on Earth and so are not readily destroyed by subduction. Formation of new continental crust is linked to periods of intense orogeny, which coincide with the formation of supercontinents such as Rodinia, Pangaea and Gondwana. The crust forms in part by aggregation of island arcs, including granite and metamorphic fold belts, and it is preserved in part by depletion of the underlying mantle to form buoyant lithospheric mantle. Crustal movement on continents may result in earthquakes, while movement under the seabed can lead to tsunamis.
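As a worked illustration of the isostasy mentioned above, a simple Airy model supports a mountain range of height h with a crustal root of thickness r satisfying rho_c * h = (rho_m - rho_c) * r. Using the article's average continental-crust density and an assumed peridotite upper-mantle density of about 3.3 g/cm3:

```python
rho_c = 2.835   # mean continental crust density from the text, g/cm^3
rho_m = 3.3     # assumed upper-mantle (peridotite) density, g/cm^3

def root_thickness(elevation_km: float) -> float:
    # Airy isostasy: the root thickness needed to float a given elevation.
    return elevation_km * rho_c / (rho_m - rho_c)

# A 5 km high range would need a root of roughly 30 km:
print(round(root_thickness(5.0), 1), "km")  # ~30.5 km
```

The small density contrast between crust and mantle is why continental roots are several times deeper than the topography they support.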
========================================
[SOURCE: https://en.wikipedia.org/wiki/Rosh_yeshiva] | [TOKENS: 1057]
Rosh yeshiva

Rosh yeshiva or Rosh Hayeshiva (Hebrew: ראש ישיבה, pl. Hebrew: ראשי ישיבה, roshei yeshiva or rashe yeshiva; Anglicized pl. rosh yeshivas) is the title given to the dean of a yeshiva, a Jewish educational institution that focuses on the study of traditional religious texts, primarily the Talmud and the Torah, and halakha (Jewish law). The general role of the rosh yeshiva is to oversee Talmudic studies and practical matters. The rosh yeshiva will often give the highest shiur (class) and is also the one who decides whether to grant permission for students to undertake classes for rabbinical ordination, known as semicha. The term is a compound of the Hebrew words rosh ("head") and yeshiva (a school of religious Jewish education). The rosh yeshiva is required to have a comprehensive knowledge of the Talmud and the ability to analyse and present new perspectives, called chidushim (novellae), verbally and often in print. In some institutions, such as Yeshiva University's Rabbi Isaac Elchanan Theological Seminary, the title of rosh yeshiva is given to many rabbis, and the dean of the yeshiva is known as the rosh ha-yeshiva.

Role

The primary role of the rosh yeshiva is not simply to be the dean, but generally to give the highest-level lecture in the yeshiva, which is usually a program of at least two years. Students who have studied in a yeshiva are generally known as "students of the rosh yeshiva", as it is in this lecture that they usually acquire their method of Talmudic analysis and critical reasoning, a method based on the particular style of that rosh yeshiva. In addition, since yeshivas play a central role in the life of certain communities within Orthodox Judaism, the position of rosh yeshiva is more than just a position within the yeshiva: a rosh yeshiva is often seen as a pillar of leadership in extended communities.

In Hasidic Judaism, the role of rosh yeshiva is secondary to the Rebbe, the head of the Hasidic dynasty that controls the yeshiva. In many Hasidic groups, the rosh yeshiva of a school will be the son or son-in-law of the rebbe, the assumed heir of the rebbe. However, the role that yeshivas have within Hasidic communities is not nearly as important as it is in Lithuanian Jewish (Litvishe) communities. Hasidic students usually get married at the age of 18, which in most cases is the end of their yeshiva education. Students in the Lithuanian Jewish communities typically continue to study until they get married, at around age 23, with the vast majority continuing their studies in a kollel after marriage. As a result, the role that a rosh yeshiva plays in Lithuanian Jewish communities is much more important than in Hasidic ones.

History

Yeshivas continue the scholarly traditions of the sages of the Mishnah and Talmud, who often headed academies with hundreds of students. In the Talmudic academies in Babylonia, the rosh yeshiva was referred to as the reish metivta ("head of the academy" in Aramaic) and had the title of gaon. Regard for the rosh yeshiva is in many ways a transplantation of Hasidic attitudes into the Lithuanian world.

Dynasties

Depending on the size of the yeshiva, there may be several rosh yeshivas, sometimes from one extended family. There are familial dynasties of rosh yeshivas, for example the Soloveitchik, Finkel, Feinstein, Kotler, and Kook families, which head many yeshivas in the United States and Israel.

Famous rosh yeshivas

Prior to the Holocaust, most of the large yeshivas were based in Eastern Europe.
Presently, the majority of the world's yeshivas and their rosh yeshivas are located in the United States and Israel.

Rosh mesivta

The title rosh mesivta (alt. rosh metivta) has a long history, going back many centuries. The role is comparable to that of a dean in a university.

Mashgiach ruchani

The personal and ethical development of the students in the yeshiva is usually overseen by a different personality, known as the mashgiach, or spiritual supervisor. This role, introduced by the Mussar movement in the 19th century, made the perfection of character one of the aims of attending a yeshiva. One typical and influential mashgiach was Eliyahu Eliezer Dessler.
========================================
[SOURCE: https://en.wikipedia.org/wiki/33_Orionis] | [TOKENS: 209]
List of stars in Orion

This is the list of notable stars in the constellation Orion, sorted by decreasing brightness. The columns are:
• Name = Proper name
• B = Bayer designation
• F or/and G. = Flamsteed designation or Gould designation
• Var = Variable star designation
• HD = Henry Draper Catalogue designation number
• HIP = Hipparcos Catalogue designation number
• RA = Right ascension for the Epoch/Equinox J2000.0
• Dec = Declination for the Epoch/Equinox J2000.0
• vis. mag. = visual magnitude (m or mv), also known as apparent magnitude
• abs. mag. = absolute magnitude (Mv)
• Dist. (ly) = Distance in light-years from Earth
• Sp. class = Spectral class of the star in the stellar classification system
• Notes = Common name(s) or alternate name(s); comments; notable properties [for example: multiple star status, range of variability if it is a variable star, exoplanets, etc.]
========================================
[SOURCE: https://en.wikipedia.org/wiki/Surface_runoff] | [TOKENS: 3903]
Surface runoff

Surface runoff (also known as overland flow or terrestrial runoff) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow). It occurs when excess rainwater, stormwater, meltwater, or water from other sources can no longer infiltrate into the soil quickly enough. This can happen when the soil is saturated with water to its full capacity and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement) do not allow water to soak into the ground. Runoff can arise through either natural or human-made processes. Surface runoff is a major component of the water cycle and is the primary agent of soil erosion by water. The land area producing runoff that drains to a common point is called a drainage basin.

Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution, as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum, pesticides, fertilizers and others. Much agricultural pollution is exacerbated by surface runoff, leading to a number of downstream impacts, including nutrient pollution that causes eutrophication. In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding, which can result in property damage, damp and mold in basements, and street flooding.

Generation

Surface runoff is defined as precipitation (rain, snow, sleet, or hail) that reaches a surface stream without ever passing below the soil surface. It is distinct from direct runoff, which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers.

Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring and glacier melt in the summer, leading to pronounced flow maxima in rivers affected by them. The determining factors for the rate of melting of snow or glaciers are air temperature and the duration of sunlight. In high mountain regions, streams frequently rise on sunny days and fall on cloudy ones for this reason.

In areas where there is no snow, runoff will come from rainfall. However, not all rainfall will produce runoff, because soil storage can absorb light showers. On the extremely ancient soils of Australia and Southern Africa, proteoid roots with their extremely dense networks of root hairs can absorb so much rainwater as to prevent runoff even with substantial amounts of rainfall. In these regions, even on less infertile cracking clay soils, high amounts of rainfall and potential evaporation are needed to generate any surface runoff, leading to specialised adaptations to extremely variable (usually ephemeral) streams.

Infiltration excess overland flow occurs when the rate of rainfall on a surface exceeds the rate at which water can infiltrate the ground, and any depression storage has already been filled. This is also called Hortonian overland flow (after Robert E. Horton) or unsaturated overland flow. It occurs more commonly in arid and semi-arid regions, where rainfall intensities are high and the soil infiltration capacity is reduced because of surface sealing, or in urban areas where pavements prevent water from infiltrating.
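Hortonian flow is often quantified with Horton's infiltration equation, in which infiltration capacity decays exponentially during a storm, f(t) = fc + (f0 - fc) * exp(-k t), and runoff is the rainfall rate in excess of f(t). A minimal Python sketch with illustrative parameter values:

```python
import math

f0, fc, k = 50.0, 5.0, 2.0   # initial and final capacity (mm/h), decay rate (1/h); illustrative

def infiltration_capacity(t_hours: float) -> float:
    # Horton's equation: capacity decays from f0 toward fc during the storm.
    return fc + (f0 - fc) * math.exp(-k * t_hours)

def hortonian_runoff(rain_mm_per_h: float, t_hours: float) -> float:
    # Runoff is rainfall in excess of the current infiltration capacity.
    return max(0.0, rain_mm_per_h - infiltration_capacity(t_hours))

for t in (0.0, 0.5, 1.0, 2.0):
    print(t, round(hortonian_runoff(20.0, t), 1))
# Early in the storm all 20 mm/h infiltrates; runoff appears only as the
# soil's capacity decays toward fc.
```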
When the soil is saturated, the depression storage is filled, and rain continues to fall, the rainfall will immediately produce surface runoff. The level of antecedent soil moisture is one factor affecting the time until the soil becomes saturated. This runoff is called saturation excess overland flow, saturated overland flow, or Dunne runoff.

Soil retains a degree of moisture after a rainfall. This residual moisture affects the soil's infiltration capacity: during the next rainfall event, the infiltration capacity will cause the soil to become saturated at a different rate. The higher the level of antecedent soil moisture, the more quickly the soil becomes saturated. Once the soil is saturated, runoff occurs. Surface runoff is therefore a significant factor in the control of soil moisture after medium- and low-intensity storms.

After water infiltrates the soil on an up-slope portion of a hill, the water may flow laterally through the soil and exfiltrate (flow out of the soil) closer to a channel. This is called subsurface return flow or throughflow. As it flows, the amount of runoff may be reduced in a number of possible ways: a small portion of it may evapotranspire; water may become temporarily stored in microtopographic depressions; and a portion of it may infiltrate as it flows overland. Any remaining surface water eventually flows into a receiving water body such as a river, lake, estuary or ocean.

Human influence

Urbanization increases surface runoff by creating more impervious surfaces, such as pavement and buildings, that do not allow percolation of the water down through the soil to the aquifer. Water is instead forced directly into streams or storm water runoff drains, where erosion and siltation can be major problems, even when flooding is not. Increased runoff reduces groundwater recharge, thus lowering the water table and making droughts worse, especially for farmers and others who depend on water wells. When anthropogenic contaminants are dissolved or suspended in runoff, the human impact is expanded to create water pollution. This pollutant load can reach various receiving waters such as streams, rivers, lakes, estuaries and oceans, with resultant changes to the water chemistry of these water systems and their related ecosystems. As humans continue to alter the climate through the addition of greenhouse gases to the atmosphere, precipitation patterns are expected to change as the atmospheric capacity for water vapor increases. This will have direct consequences on runoff amounts.

Urban runoff is surface runoff of rainwater, landscape irrigation, and car washing created by urbanization. Impervious surfaces (roads, parking lots and sidewalks) are constructed during land development. During rain, storms, and other precipitation events, these surfaces (built from materials such as asphalt and concrete), along with rooftops, carry polluted stormwater to storm drains, instead of allowing the water to percolate through soil. This causes a lowering of the water table (because groundwater recharge is lessened) and flooding (since the amount of water that remains on the surface is greater). Most municipal storm sewer systems discharge untreated stormwater to streams, rivers, and bays. This excess water can also make its way into people's properties through basement backups and seepage through building walls and floors. Urban runoff can be a major source of urban flooding and water pollution in urban communities worldwide. (A standard first-order estimate of how impervious cover raises peak runoff is sketched below.)
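One standard first-order estimate (not taken from this article) is the rational method, Q = C * i * A, in which the runoff coefficient C rises sharply with impervious cover. A minimal Python sketch:

```python
def peak_discharge(C: float, intensity_mm_h: float, area_km2: float) -> float:
    # Rational method in SI units: Q (m^3/s) = 0.278 * C * i (mm/h) * A (km^2).
    # C is the dimensionless runoff coefficient, near 0.9 for pavement and
    # far lower for vegetated ground.
    return 0.278 * C * intensity_mm_h * area_km2

storm, area = 30.0, 0.5   # a 30 mm/h storm over 0.5 km^2
print(peak_discharge(0.2, storm, area))  # grassed site:  ~0.8 m^3/s
print(peak_discharge(0.9, storm, area))  # paved site:    ~3.8 m^3/s
```

The same storm over the same area yields several times the peak flow once the surface is paved, which is the mechanism behind the urban flooding described above.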
Industrial stormwater is runoff from precipitation (rain, snow, sleet, freezing rain, or hail) that lands on industrial sites (e.g. manufacturing facilities, mines, airports). This runoff is often polluted by materials that are handled or stored on the sites, and the facilities are subject to regulations to control the discharges. To manage industrial stormwater effectively, facilities use best management practices (BMPs) that aim both to prevent pollutants from entering the runoff and to treat water before it is released from the site. Common preventive steps include maintaining clean workspaces, conducting routine equipment checks, storing materials properly, preventing spills, and training staff in pollution prevention techniques. To treat stormwater, facilities may install structural controls such as detention and retention ponds, constructed wetlands, filter systems, or oil-water separators. These systems help reduce pollution by settling out solids, filtering water, or supporting natural treatment processes before the water is discharged.

Effects of surface runoff

Surface runoff can cause erosion of the Earth's surface; eroded material may be deposited a considerable distance away. There are four main types of soil erosion by water: splash erosion, sheet erosion, rill erosion and gully erosion. Splash erosion is the result of mechanical collision of raindrops with the soil surface: soil particles which are dislodged by the impact then move with the surface runoff. Sheet erosion is the overland transport of sediment by runoff without a well-defined channel. Soil surface roughness may cause runoff to become concentrated into narrower flow paths: as these incise, the small but well-defined channels which are formed are known as rills. These channels can be as small as one centimeter wide or as large as several meters. If runoff continues to incise and enlarge rills, they may eventually grow to become gullies. Gully erosion can transport large amounts of eroded material in a short time.

Reduced crop productivity usually results from erosion, and these effects are studied in the field of soil conservation. The soil particles carried in runoff vary in size from about 0.001 millimeter to 1.0 millimeter in diameter. Larger particles settle over short transport distances, whereas small particles can be carried over long distances suspended in the water column. Erosion of silty soils that contain smaller particles generates turbidity and diminishes light transmission, which disrupts aquatic ecosystems. Entire sections of countries have been rendered unproductive by erosion. On the high central plateau of Madagascar, approximately ten percent of that country's land area, virtually the entire landscape is devoid of vegetation, with erosive gully furrows typically in excess of 50 meters deep and one kilometer wide. Shifting cultivation is a farming system which sometimes incorporates the slash-and-burn method in some regions of the world. Erosion causes the loss of fertile topsoil and reduces soil fertility and the quality of agricultural produce. Modern industrial farming is another major cause of erosion. Over a third of the U.S. Corn Belt has completely lost its topsoil. Switching to no-till practices would reduce soil erosion from U.S. agricultural fields by more than 70 percent.

The principal environmental issues associated with runoff are the impacts to surface water, groundwater and soil through transport of water pollutants to these systems.
Ultimately these consequences translate into human health risk, ecosystem disturbance and aesthetic impact to water resources. Some of the contaminants that create the greatest impact to surface waters arising from runoff are petroleum substances, herbicides and fertilizers. Quantitative uptake by surface runoff of pesticides and other contaminants has been studied since the 1960s, and early on contact of pesticides with water was known to enhance phytotoxicity. In the case of surface waters, the impacts translate to water pollution, since the streams and rivers have received runoff carrying various chemicals or sediments. When surface waters are used as potable water supplies, they can be compromised regarding health risks and drinking water aesthetics (that is, odor, color and turbidity effects). Contaminated surface waters risk altering the metabolic processes of the aquatic species that they host; these alterations can lead to death, such as fish kills, or alter the balance of populations present. Other specific impacts are on animal mating, spawning, egg and larvae viability, juvenile survival and plant productivity. Some research shows surface runoff of pesticides, such as DDT, can alter the gender of fish species genetically, transforming male fish into female fish. Surface runoff occurring within forests can supply lakes with high loads of mineral nitrogen and phosphorus, leading to eutrophication. Runoff waters within coniferous forests are also enriched with humic acids and can lead to humification of water bodies. Additionally, high standing and young islands in the tropics and subtropics can undergo high soil erosion rates and also contribute large material fluxes to the coastal ocean. Such land-derived runoff of sediment, nutrients, carbon, and contaminants can have large impacts on global biogeochemical cycles and marine and coastal ecosystems. In the case of groundwater, the main issue is contamination of drinking water, if the aquifer is abstracted for human use. Regarding soil contamination, runoff waters can have two important pathways of concern. Firstly, runoff water can extract soil contaminants and carry them in the form of water pollution to even more sensitive aquatic habitats. Secondly, runoff can deposit contaminants on pristine soils, creating health or ecological consequences. Another agricultural issue is the transport of agricultural chemicals (nitrates, phosphates, pesticides, herbicides, etc.) via surface runoff. This occurs when chemical use is excessive or poorly timed with respect to high precipitation. The resulting contaminated runoff represents not only a waste of agricultural chemicals, but also an environmental threat to downstream ecosystems. Pine straw is often used to protect soil from erosion and suppress weed growth; however, harvesting it may increase soil erosion. Surface runoff also has significant economic effects. Pine straw is a cost-effective way of dealing with surface runoff. Moreover, surface runoff can be reused through the growth of elephant grass; in Nigeria, elephant grass is considered an economical way to reduce surface runoff and erosion. China's economically important crops, such as vegetables, have also suffered significant impacts from surface runoff, and systems have been implemented there to reduce the loss of soil nutrients (nitrogen and phosphorus).
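As noted above, the distance eroded particles travel depends strongly on their size. A rough way to see why is Stokes' law for the terminal settling velocity of a small sphere in water. The Python sketch below uses standard water properties at about 20 °C and an assumed quartz particle density, and it is only realistic for the fine (clay and silt) end of the 0.001–1.0 mm range, since Stokes' law overestimates settling for sand-sized grains:

    # Illustrative sketch: Stokes' law, v = g * d^2 * (rho_s - rho_w) / (18 * mu).
    # Constants are standard values for water at ~20 C and quartz grains (assumptions).
    g = 9.81          # gravitational acceleration, m/s^2
    rho_s = 2650.0    # particle (quartz) density, kg/m^3
    rho_w = 998.0     # water density, kg/m^3
    mu = 1.0e-3       # dynamic viscosity of water, Pa*s

    def stokes_velocity(d_m):
        """Terminal settling velocity (m/s) of a sphere of diameter d_m (meters)."""
        return g * d_m**2 * (rho_s - rho_w) / (18.0 * mu)

    for d_mm in (0.001, 0.01, 0.1):
        v = stokes_velocity(d_mm / 1000.0)
        print(f"{d_mm} mm particle settles at roughly {v:.1e} m/s")

A 0.001 mm clay particle settles on the order of 10^-6 m/s and so stays suspended over long distances, while a 0.1 mm grain settles roughly ten thousand times faster, consistent with the transport behavior described above.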
Flooding occurs when a watercourse is unable to convey the quantity of runoff flowing downstream. The frequency with which this occurs is described by a return period. Flooding is a natural process, which maintains ecosystem composition and processes, but it can also be altered by land use changes such as river engineering. Floods can be beneficial to societies or cause damage. Agriculture along the Nile floodplain took advantage of the seasonal flooding that deposited nutrients beneficial for crops. However, as the number and susceptibility of settlements increase, flooding increasingly becomes a natural hazard. In urban areas, surface runoff is the primary cause of urban flooding, known for its repetitive and costly impact on communities. Adverse impacts span loss of life, property damage, contamination of water supplies, loss of crops, and social dislocation and temporary homelessness. Floods are among the most devastating of natural disasters. The use of supplemental irrigation is also recognized as a significant way in which crops such as maize can retain nitrogen fertilizers in soil, resulting in improved crop water availability. Mitigation and treatment Mitigation of adverse impacts of runoff can take several forms: Land use controls. Regulatory agencies in many parts of the world have encouraged research on methods of minimizing total surface runoff by avoiding unnecessary hardscape. Many municipalities have produced guidelines and codes (zoning and related ordinances) for land developers that encourage minimum-width sidewalks, use of pavers set in earth for driveways and walkways and other design techniques to allow maximum water infiltration in urban settings. An example of a local program specifying design requirements, construction practices and maintenance requirements for buildings and properties is in Santa Monica, California. Erosion controls have appeared since medieval times, when farmers realized the importance of contour farming to protect soil resources. Beginning in the 1950s these agricultural methods became increasingly sophisticated. In the 1960s some state and local governments began to focus their efforts on mitigation of construction runoff by requiring builders to implement erosion and sediment controls (ESCs). This included such techniques as: use of straw bales and barriers to slow runoff on slopes, installation of silt fences, programming construction for months that have less rainfall and minimizing the extent and duration of exposed graded areas. Montgomery County, Maryland implemented the first local government sediment control program in 1965, and this was followed by a statewide program in Maryland in 1970. Flood control programs as early as the first half of the twentieth century became quantitative in predicting peak flows of riverine systems. Strategies have progressively been developed to minimize peak flows and also to reduce channel velocities. Some of the techniques commonly applied are: provision of holding ponds (also called detention basins or balancing lakes) to buffer riverine peak flows, use of energy dissipators in channels to reduce stream velocity and land use controls to minimize runoff. Chemical use and handling. Following enactment of the U.S. Resource Conservation and Recovery Act (RCRA) in 1976, and later the Water Quality Act of 1987, states and cities have become more vigilant in controlling the containment and storage of toxic chemicals, thus preventing releases and leakage.
Methods commonly applied are: requirements for double containment of underground storage tanks, registration of hazardous materials usage, reduction in the number of allowed pesticides and more stringent regulation of fertilizers and herbicides in landscape maintenance. In many industrial cases, pretreatment of wastes is required, to minimize escape of pollutants into sanitary or stormwater sewers. The U.S. Clean Water Act (CWA) requires that local governments in urbanized areas (as defined by the Census Bureau) obtain stormwater discharge permits for their drainage systems. Essentially this means that the locality must operate a stormwater management program for all surface runoff that enters the municipal separate storm sewer system ("MS4"). EPA and state regulations and related publications outline six basic components that each local program must contain. Other property owners that operate storm drain systems similar to municipalities, such as state highway systems, universities, military bases and prisons, are also subject to the MS4 permit requirements. Measurement and mathematical modeling Runoff is analyzed by using mathematical models in combination with various water quality sampling methods. Measurements can be made using continuous automated water quality analysis instruments targeted on pollutants such as specific organic or inorganic chemicals, pH, turbidity, etc., or targeted on secondary indicators such as dissolved oxygen. Measurements can also be made in batch form by extracting a single water sample and conducting chemical or physical tests on that sample. In the 1950s or earlier, hydrology transport models appeared to calculate quantities of runoff, primarily for flood forecasting. Beginning in the early 1970s, computer models were developed to analyze the transport of runoff carrying water pollutants. These models considered dissolution rates of various chemicals, infiltration into soils, and the ultimate pollutant load delivered to receiving waters. One of the earliest models addressing chemical dissolution in runoff and resulting transport was developed in the early 1970s under contract to the United States Environmental Protection Agency (EPA). This computer model formed the basis of much of the mitigation study that led to strategies for land use and chemical handling controls. Increasingly, stormwater practitioners have recognized the need for Monte Carlo models to simulate stormwater processes because of natural variations in multiple variables affecting runoff quality and quantity. The benefit of the Monte Carlo analysis is not to decrease uncertainty in the input statistics but to represent the different combinations of the variables that determine potential risks of water-quality excursions (a minimal sketch of this idea follows at the end of this section). One example of this type of stormwater model is the stochastic empirical loading and dilution model (SELDM). SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the effectiveness of such management measures for reducing these risks. SELDM provides a method for rapid assessment of information that is otherwise difficult or impossible to obtain because it models the interactions among hydrologic variables (with different probability distributions), resulting in a population of values representing likely long-term outcomes from runoff processes and the potential effects of various mitigation measures.
SELDM also provides the means for rapidly performing sensitivity analyses to determine the possible effects of varying input assumptions on the risks for water-quality excursions. Other computer models have been developed (such as the DSSAM Model) that allow surface runoff to be tracked through a river course as reactive water pollutants. In this case, the surface runoff may be considered to be a line source of water pollution to the receiving waters.
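To illustrate the Monte Carlo idea described above (this is not SELDM itself, and every distribution and parameter below is an invented assumption), one can sample runoff concentrations and stream dilution from lognormal distributions and count how often a water-quality threshold is exceeded downstream:

    # Monte Carlo sketch: estimate the risk of water-quality excursions by
    # sampling assumed lognormal distributions for runoff concentration and
    # dilution.  All parameters are hypothetical illustration values.
    import random

    random.seed(42)
    N = 100_000
    threshold = 10.0              # hypothetical limit, ug/L
    exceedances = 0
    for _ in range(N):
        runoff_conc = random.lognormvariate(2.0, 0.8)   # ug/L in runoff
        dilution = random.lognormvariate(1.0, 0.5)      # stream:runoff flow ratio
        mixed = runoff_conc / (1.0 + dilution)          # fully mixed concentration
        if mixed > threshold:
            exceedances += 1

    print(f"Estimated excursion risk: {exceedances / N:.2%}")

The point, as noted above, is not that the inputs become less uncertain, but that the simulation exposes how their combinations translate into a frequency of water-quality excursions.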
========================================
[SOURCE: https://en.wikipedia.org/wiki/Web_servers] | [TOKENS: 6455]
Contents Web server A web server is computer software that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message (a short illustrative sketch of this exchange appears below). A web server can also accept and store resources sent from the user agent if configured to do so. The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers. A resource sent from a web server can be a pre-existing file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications. Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages. History This is a very brief history of web server programs, so some information necessarily overlaps with the histories of the web browsers, the World Wide Web and the Internet; therefore, for the sake of clarity, some key historical information reported below may be similar to that found in one or more of those history articles. In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments, and it was read by several people. In October 1990 the proposal was reformulated and enriched (having Robert Cailliau as co-author), and finally it was approved. Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on the NeXTSTEP OS installed on NeXT workstations. Those early browsers retrieved web pages written in a simple early form of HTML from web servers using a new basic communication protocol that was named HTTP 0.9. In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it. Soon after, those programs, along with their source code, were made available to people interested in their usage. Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and further develop on top of them. Berners-Lee started promoting the adoption and the usage of those programs along with their porting to other operating systems. In December 1991, the first web server outside Europe was installed at SLAC (U.S.A.). This was a very important event because it started trans-continental web communications between web browsers and web servers.
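Setting the history aside for a moment, the request–response exchange mentioned in the overview above can be sketched in a few lines. The following Python fragment is a minimal illustration, not a production client; "example.com" is just a placeholder host. It opens a TCP connection, sends an HTTP/1.1 GET request, and reads the server's reply:

    # Minimal sketch of an HTTP request-response exchange over a raw socket.
    import socket

    HOST = "example.com"          # placeholder host for illustration
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):   # read until the server closes
            response += chunk

    # The status line comes first, e.g. b'HTTP/1.1 200 OK'
    print(response.split(b"\r\n", 1)[0])

The server's reply begins with a status line and headers, followed by the resource content or an error message, exactly as described above.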
In 1991–1993, the CERN web server program continued to be actively developed by the www group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed. In April 1993, CERN issued a public official statement stating that the three components of Web software (the basic line-mode client, the web server and the library of common code), along with their source code, were put in the public domain. This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed). At the beginning of 1994, the most notable among new web servers was NCSA httpd, which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML FORMs in order to send data to a web server), highlighted the potential of web technology for publishing and distributed computing applications. In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters and other professional figures interested in that server started to write and collect patches, thanks to the NCSA httpd source code being in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started. At the end of 1994, a new commercial web server, named Netsite, was released with specific features. It was the first one of many other similar products that were developed first by Netscape, then also by Sun Microsystems, and finally by Oracle Corporation. In mid-1995, the first version of IIS was released by Microsoft for the Windows NT OS. This marked the entry, in the field of World Wide Web technologies, of a commercial developer and vendor that has played and still plays a key role on both sides (client and server) of the web. In the second half of 1995, CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers which had a much faster development cycle along with more features, more fixes applied, and better performance than the previous ones. At the end of 1996, there were already over fifty known, different web-server-software programs available to everybody who wanted to own an Internet domain name or to host websites. Many of them lived only shortly and were replaced by other web servers. The publication of RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability. Between 1996 and 1999, Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options whereas among the freely available and open-source programs Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features).
In those years there was also another commercial web server called Zeus (now discontinued) that was known as one of the fastest and most scalable web servers available on the market, at least until the first decade of the 2000s, despite its low percentage of usage. Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward IIS dropped to much lower percentages of usage than Apache (see also market share). From 2005–2006, Apache started to improve its speed and its scalability level by introducing new performance features (e.g., event MPM and new content cache). As those new performance improvements initially were marked as experimental, they were not enabled by its users for a long time, and so Apache suffered even more from the competition of commercial servers and, above all, of other open-source servers which meanwhile had already achieved far superior performance (mostly when serving static content) since the beginning of their development and which, at the time of the Apache decline, were able to offer a long enough list of well-tested advanced features. In the early 2000s, not only other commercial and highly competitive web servers (e.g., LiteSpeed) but also many other open-source programs such as Hiawatha, Cherokee HTTP server, Lighttpd, Nginx and other derived and related products, also available with commercial support, emerged. Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC 2616) to 4, 6 or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the problem of the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages. Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks). In 2015, a new protocol version, HTTP/2, was published as an RFC, and, as the implementation of the new specifications was not trivial at all, a dilemma arose among developers of less popular web servers (e.g., those with a percentage of usage lower than 1–2%) about adding or not adding support for that new protocol version. In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always required encrypted connections, capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port, binary representation of HTTP messages, message priority, compression of HTTP headers, use of streams, also known as TCP/IP sub-connections, and related flow control, etc.)
and so a few developers of those web servers opted for not supporting the new HTTP/2 version (at least in the near future), also because of these main reasons: Instead, developers of the most popular web servers rushed to offer the availability of the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point and because most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of the ever-increasing web traffic and they really wanted to install and to try – as soon as possible – something that could drastically lower the number of TCP/IP connections and speed up access to hosted websites. In 2020–2021 the dynamics of HTTP/2 implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC about the HTTP/3 protocol. Technical overview The following technical overview should be considered only as an attempt to give a few very limited examples of some features that may be implemented in a web server and some of the tasks that it may perform, in order to give a sufficiently broad picture of the topic. A web server program plays the role of a server in a client–server model by implementing one or more versions of the HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage. The complexity and the efficiency of a web server program may vary a lot depending on: Although web server programs differ in how they are implemented, most of them offer the following common features. These are basic features that most web servers usually have. A few other more advanced and popular features (only a very short selection) are the following ones. A web server program, when it is running, usually performs several general tasks: Web server programs are able to: Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, including security checks. Web server programs usually perform some type of URL normalization (URL found in most HTTP request messages) in order to: The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and adding trailing slashes to a non-empty path component (a small sketch of these steps follows below). URL mapping is the process by which a web server or application framework determines how an incoming URL request is routed to the appropriate resource, handler, or action. Modern URL mapping mechanisms analyse the structure of the requested URL and use routing rules or configuration patterns to deliver static resources, invoke dynamic handlers, or perform rewrites and redirects without directly relying on file system paths. This approach allows clean, human-readable URLs and flexible application architectures.
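The following Python fragment is a simplified sketch of the two normalizations just mentioned, lowercasing the scheme and host and removing "." and ".." path segments (RFC 3986, section 5.2.4, defines the authoritative algorithm; this illustration cuts corners):

    # Simplified URL normalization sketch: lowercase scheme/host and remove
    # "." and ".." path segments.  Not a complete RFC 3986 implementation.
    from urllib.parse import urlsplit, urlunsplit

    def normalize(url):
        parts = urlsplit(url)
        segments = []
        for seg in parts.path.split("/"):
            if seg == ".":
                continue                 # "." means "this level": drop it
            elif seg == "..":
                if len(segments) > 1:
                    segments.pop()       # ".." removes the previous segment
            else:
                segments.append(seg)
        path = "/".join(segments) or "/"
        return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                           path, parts.query, parts.fragment))

    print(normalize("HTTP://WWW.Example.COM/a/./b/../c"))
    # prints: http://www.example.com/a/c

The len(segments) > 1 guard keeps the leading empty segment, so ".." can never climb above the path root, which is also a basic security property when the path is later mapped to the file system.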
In practice, web server programs that implement advanced features, beyond the simple static content serving (e.g., URL rewrite engine, dynamic content serving), usually have to figure out how that URL has to be handled as a: One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g., initial parts of file path, filename extension and other path components) to a specific URL handler (file, directory, external program or internal module). When a web server implements one or more of the above-mentioned advanced features, the path part of a valid URL may not always match an existing file system path under the website directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests. Web server programs are able to translate a URL path (all or part of it) that refers to a physical file system path to an absolute path under the target website's root directory. The website's root directory may be specified by a configuration file or by some internal rule of the web server by using the name of the website, which is the host part of the URL found in the HTTP client request. Path translation to the file system is done for the following types of web resources: The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (Host) website's root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is: /var/www/website). See the following examples of how it may result. URL path translation for a static file request Example of a static request of an existing file specified by the following URL: The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request: The result is the local file system resource: The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or its access is forbidden. URL path translation for a directory request (without a static index file) Example of an implicit dynamic request of an existing directory specified by the following URL: The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request: The result is the local directory path: The web server then verifies the existence of the directory and, if it exists and can be accessed, tries to find an index file (which in this case does not exist) and so passes the request to an internal module or a program dedicated to directory listings, then finally reads the data output and sends a response to the client's web browser. The response will describe the content of the directory (list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or its access is forbidden. URL path translation for a dynamic program request For a dynamic request, the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content.
Example of a dynamic request using a program file to generate output: The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request: The result is the local file path of the program (in this example, a PHP program): The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the info it needs to run. (In this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021.) In addition to this, the web server reads the data sent from the external program and resends that data to the client that made the request. Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers. In practice, the web server has to handle the request by using one of these response paths: If a web server program is capable of serving static content and it has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation and URL redirection) that of an existing file under the root directory of a website, and the file has attributes which match those required by the internal rules of the web server program. That kind of content is called static because usually it is not changed by the web server when it is sent to clients and because it remains the same until it is modified (file modification) by some program. NOTE: when serving static content only, a web server program usually does not change the file contents of the served websites (as they are only read and never written) and so it suffices to support only these HTTP methods: Serving static file content can be sped up by a file cache. If a web server program receives a client request message with a URL whose path matches that of an existing directory, and that directory is accessible, and serving directory index files is enabled, then a web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found or other conditions are not met, then an error message is returned. The most used names for static index files are: index.html, index.htm and Default.htm. If a web server program receives a client request message with a URL whose path matches the file name of an existing file, and that file is accessible by the web server program and its attributes match the internal rules of the web server program, then the web server program can send that file to the client. Usually, for security reasons, most web server programs are pre-configured to serve only regular files or to avoid using special file types like device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources. If a web server program is capable of serving dynamic content and it has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request.
After that, the web server program reads the data response from it (data that it has generated, often on the fly) and then resends it to the client program that made the request.[citation needed] NOTE: when serving static and dynamic content, a web server program usually has to support also the following HTTP method in order to be able to safely receive data from clients and so to be able to host also websites with interactive forms that may send large data sets (e.g., lots of data entry or file uploads) to the web server, external programs or modules: In order to be able to communicate with its internal modules or external programs, a web server program must have implemented one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content). The three standard and historical gateway interfaces are the following ones (a minimal sketch of a CGI program appears below). A web server program may be capable of managing the dynamic generation (on the fly) of a directory index list of files and sub-directories. If a web server program is configured to do so, and a requested URL path matches an existing directory, and its access is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned. Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g., $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server; e.g., index.tpl), or the usage of HTML and embedded source code that is interpreted and executed (e.g., index.asp), or by supporting the usage of dynamic index programs such as CGIs, SCGIs, FCGIs (e.g., index.cgi, index.php, index.fcgi). Usage of dynamically generated directory listings is usually avoided or limited to a few selected directories of a website because that generation takes many more OS resources than sending a static index page. The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times or file attributes may change randomly and frequently) as they are, without requiring further information from the requesting user. An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from or to store data to one or more data repositories:[citation needed] A processing unit can return any kind of web content, also by using data retrieved from a data repository:[citation needed] In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, then, usually, it is generated dynamically. Web server programs are able to send response messages as replies to client request messages. An error response message may be sent because a request message could not be successfully read or decoded or analyzed or executed. NOTE: the following sections are reported only as examples to help to understand what a web server, more or less, does; these sections are by no means exhaustive or complete.
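As a concrete illustration of the oldest of those gateway interfaces, CGI, the following is a minimal sketch of a CGI program in Python (the parameter name "name" and the greeting are invented for the example). The web server passes request data to the program through environment variables such as QUERY_STRING and reads the program's standard output as the response:

    #!/usr/bin/env python3
    # Minimal CGI program sketch: read the query string from the environment,
    # write HTTP headers, a blank line, and then the response body to stdout.
    import os
    from html import escape
    from urllib.parse import parse_qs

    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]      # "name" is an invented parameter

    print("Content-Type: text/html")            # header(s) first
    print()                                     # blank line ends the headers
    print(f"<html><body><p>Hello, {escape(name)}!</p></body></html>")

Because the server starts a new process for every CGI request, this interface is simple but comparatively slow, which is the cost that later interfaces such as FastCGI were designed to reduce.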
A web server program may reply to a client request message with many kinds of error messages; these errors are divided mainly into two categories: When an error response or message is received by a client browser, then, if it is related to the main user request (e.g., a URL of a web resource such as a web page), that error message is usually shown in some browser window or message. A web server program may be able to verify whether the requested URL path: If the authorization or access-rights feature has been implemented and enabled and access to the web resource is not granted, then, depending on the required access rights, a web server program: A web server program may have the capability of performing URL redirections to new URLs (new locations), which consists of replying to a client request message with a response message containing a new URL suited to access a valid or an existing web resource (the client should redo the request with the new URL). URL redirection of location is used: Example 1: a URL path points to a directory name but it does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name. From: /directory1/directory2 To: /directory1/directory2/ Example 2: a whole set of documents has been moved inside the website in order to reorganize their file system paths. From: /directory1/directory2/2021-10-08/ To: /directory1/directory2/2021/10/08/ Example 3: a whole set of documents has been moved to a new website and now it is mandatory to use secure HTTPS connections to access them. From: http://www.example.com/directory1/directory2/2021-10-08/ To: https://docs.example.com/directory1/2021-10-08/ The above examples are only a few of the possible kinds of redirections. A web server program is able to reply to a valid client request message with a successful message, optionally containing the requested web resource data. If web resource data is sent back to the client, then it can be static content or dynamic content depending on how it has been retrieved (from a file or from the output of some program or module). In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category. Content is usually cached by its origin: Historically, static content found in files which had to be accessed frequently, randomly and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s and 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations when compared to RAM speed, and so, since early OSs, first disk caches and then also OS file cache sub-systems were developed to speed up I/O operations of frequently accessed data. Even with the aid of an OS file cache, the relative or occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck in the increase of performance expected from top-level web servers, especially since the mid-late 1990s, when web Internet traffic started to grow exponentially along with the constant increase in speed of Internet or network lines.
The problem of how to further efficiently speed up the serving of static files, thus increasing the maximum number of requests or responses per second (RPS), started to be studied and researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs. In practice, nowadays, many web server programs include their own userland file cache, tailored for web server usage and using their specific implementation and parameters. The widespread adoption of RAID and fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server. Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys or parameters), and so, maybe for a while (e.g., from one second to several hours or more), the resulting output can be cached in RAM or even on a fast disk (a minimal sketch of this idea is given below). The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g., every n minutes) and that are accessed by a huge number of clients per minute or per hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches. In most cases, though, those kinds of caches are implemented by external servers (e.g., reverse proxy) or by storing dynamic data output in separate computers, managed by specific applications (e.g., memcached), so as not to compete for hardware resources (CPU, RAM, disks) with web servers. Web server software can either be incorporated into the OS and executed in kernel space, or it can be executed in user space (like other regular applications). Web servers that run in kernel mode (usually called kernel space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode, but there are disadvantages in running a web server in kernel mode (e.g., difficulties in developing and debugging software), and run-time critical errors may lead to serious problems in the OS kernel. Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer or data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server. Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls and new optimized web server software). See also the comparison of web server software to discover which of them run in kernel mode or in user mode (also referred to as kernel space or user space).
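Returning to the dynamic-content caching described earlier in this section, the core idea can be sketched in a few lines of Python; the function names and the 60-second TTL are invented for the illustration, and real servers add locking, size limits and invalidation:

    # Sketch of a time-to-live (TTL) cache for generated content: repeated
    # requests within the TTL skip the expensive generation step.
    import time

    _cache = {}   # url -> (expires_at, body)

    def get_content(url, generate, ttl_seconds=60):
        """Return output for `url`, regenerating it only when the copy is stale."""
        entry = _cache.get(url)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]                      # still fresh: serve from cache
        body = generate(url)                     # stale or missing: regenerate
        _cache[url] = (now + ttl_seconds, body)
        return body

This is the same trade-off described above: content served within the TTL window may be slightly out of date, in exchange for skipping the cost of calling the internal module or external program on every request.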
Performances To improve the user experience (on the client or browser side), a web server should reply quickly (as soon as possible) to client requests; unless content response is throttled (by configuration) for some types of files (e.g., big or huge files), returned data content should also be sent as fast as possible (high transfer speed). In other words, a web server should always be very responsive, even under high load of web traffic, in order to keep the total user's wait (sum of browser time + network time + web server response time) for a response as low as possible. For web server software, the main key performance metrics (measured under varying operating conditions) usually are at least the following ones: Among the operating conditions, the number (1 .. n) of concurrent client connections used during a test is an important parameter because it allows one to correlate the concurrency level supported by the web server with the results of the tested performance metrics. The specific web server software design and model adopted: ... and other programming techniques, such as: ... used to implement a web server program, can strongly affect the performance and in particular the scalability level that can be achieved under heavy load or when using high-end hardware (many CPUs, disks and lots of RAM). In practice, some web server software models may require more OS resources (especially more CPUs and more RAM) than others to be able to work well and so to achieve target performances. There are many operating conditions that can affect the performances of a web server; performance values may vary depending on: Performances of a web server are typically benchmarked by using one or more of the available automated load testing tools. Load limits A web server (program installation) usually has pre-defined load limits for each combination of operating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process, see also the C10k problem and the C10M problem). When a web server is near to or over its load limits, it gets overloaded and so it may become unresponsive. At any time web servers can be overloaded due to one or more of the following causes: The symptoms of an overloaded web server are usually the following ones: To partially overcome the above load limits and to prevent overload, most popular websites use common techniques like the following ones: Market share Statistics of the market share of all sites of the top web servers on the Internet are published by Netcraft.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Desktop_Computer] | [TOKENS: 3513]
Contents Desktop computer A desktop computer, often abbreviated as desktop, is a personal computer designed for regular use at a stationary location on or near a desk (as opposed to a portable computer) due to its size and power requirements. The most common configuration has a case that houses the power supply, motherboard (a printed circuit board with a microprocessor as the central processing unit, memory, bus, certain peripherals and other electronic components), disk storage (usually one or more hard disk drives, solid-state drives, optical disc drives, and in early models floppy disk drives); a keyboard and mouse for input; and a monitor, speakers, and, often, a printer for output. The case may be oriented horizontally or vertically and placed either underneath, beside, or on top of a desk. Desktop computers with their cases oriented vertically are referred to as towers. As the majority of cases offered since the mid-1990s are in this form factor, the term desktop has been retronymically used to refer to modern cases offered in the traditional horizontal orientation. History Prior to the widespread use of microprocessors, a computer that could fit on a desk was considered remarkably small; the type of computers most commonly used were minicomputers, which, despite the name, were rather large and were "mini" only compared to the so-called "big iron". Early computers, and later the general-purpose high-throughput "mainframes", took up the space of a whole room. Minicomputers, on the contrary, generally fit into one or a few refrigerator-sized racks or, for the few smaller ones, were built into a fairly large desk, not put on top of it. It was not until the 1970s that fully programmable computers appeared that could fit entirely on top of a desk. 1970 saw the introduction of the Datapoint 2200, a "smart" computer terminal complete with keyboard and monitor, which was designed to connect with a mainframe computer, but that did not stop owners from using its built-in computational abilities as a stand-alone desktop computer. The HP 9800 series, which started out as programmable calculators in 1971 but was programmable in BASIC by 1972, used a smaller version of a minicomputer design based on ROM memory, had small one-line LED alphanumeric displays, and displayed graphics with a plotter. The Wang 2200 of 1973 had a full-size cathode-ray tube (CRT) and cassette tape storage. The IBM 5100 in 1975 had a small CRT display and could be programmed in BASIC and APL. These were generally expensive specialized computers sold for business or scientific uses. The Apple II, TRS-80 and Commodore PET were first-generation personal home computers launched in 1977, which were aimed at the consumer market rather than businessmen or computer hobbyists. Byte magazine referred to these three as the "1977 Trinity" of personal computing. Throughout the 1980s and 1990s, desktop computers became the predominant type, the most popular being the IBM PC and its clones, followed by the Apple Macintosh, with the third-placed Commodore Amiga having some success in the mid-1980s but declining by the early 1990s. Early personal computers, like the original IBM Personal Computer, were enclosed in a "desktop case", horizontally oriented to have the display screen placed on top, thus saving space on the user's actual desk, although these cases had to be sturdy enough to support the weight of the CRT displays that were widespread at the time.
Over the course of the 1990s, desktop cases gradually became less common than the more accessible tower cases, which may be located on the floor under or beside a desk rather than on a desk. Not only do these tower cases have more room for expansion, they have also freed up desk space for monitors, which were becoming larger every year. Desktop cases, particularly the compact form factors, remain popular for corporate computing environments and kiosks. Some computer cases can be interchangeably positioned either horizontally (desktop) or upright (mini-tower). During the 1990s, influential games such as Doom and Quake pushed gamers and enthusiasts to frequently upgrade to the latest CPUs and graphics cards (3dfx, ATI, and Nvidia) for their desktops (usually a tower case) in order to run these applications, though this has slowed since the late 2000s as the growing popularity of Intel integrated graphics forced game developers to scale back. Creative Technology's Sound Blaster series were a de facto standard for sound cards in desktop PCs during the 1990s until the early 2000s, when they were reduced to a niche product, as OEM desktop PCs came with sound boards integrated directly onto the motherboard. While desktops have long been the most common configuration for PCs, by the mid-2000s the growth shifted from desktops to laptops. Laptops had long been produced by contract manufacturers based in Asia, such as Foxconn, and this shift led to the closure of many desktop assembly plants in the United States by 2010. Another trend around this time was the increasing proportion of inexpensive base-configuration desktops being sold, hurting PC manufacturers such as Dell, whose build-to-order customization of desktops relied on upselling added features to buyers. Battery-powered portable computers had just a 2% worldwide market share in 1986. However, laptops have become increasingly popular, both for business and personal use. Around 109 million notebook PCs shipped worldwide in 2007, a growth of 33% compared to 2006. In 2008, it was estimated that 145.9 million notebooks were sold and that the number would grow in 2009 to 177.7 million. The third quarter of 2008 was the first time when worldwide notebook PC shipments exceeded desktops, with 38.6 million units versus 38.5 million units. In the sales breakdown of the Apple Macintosh, sales of desktop Macs stayed mostly constant while being surpassed by those of Mac notebooks, whose sales rate grew considerably; seven out of ten Macs sold were laptops in 2009, a ratio projected to rise to three out of four by 2010. The change in sales of form factors is due to the desktop iMac moving from the affordable G3 to the upscale G4 model, with subsequent releases considered premium all-in-ones. By contrast, the MSRP of the MacBook laptop lines has dropped through successive generations such that the MacBook Air and MacBook Pro constitute the lowest price of entry to a Mac, with the exception of the even more inexpensive Mac Mini (albeit without a monitor and keyboard), and the MacBooks are the top-selling form factors of the Macintosh platform today. The decades of development mean that most people already own desktop computers that meet their needs and have no need of buying a new one merely to keep pace with advancing technology. Notably, the successive release of new versions of Windows (Windows 95, 98, XP, Vista, 7, 8, 10 and so on) had been a driver for the replacement of PCs in the 1990s, but this slowed in the 2000s.
IDC analyst Jay Chou suggested that Windows 8 actually hurt sales of PCs in 2012, as businesses decided to stick with Windows 7 rather than upgrade. Some suggested that Microsoft had acknowledged "implicitly ringing the desktop PC death knell", as Windows 8 offered little upgrade in desktop PC functionality over Windows 7; instead, Windows 8's innovations were mostly on the mobile side. The post-PC trend saw a decline in the sales of desktop and laptop PCs. The decline was attributed to the increased power and applications of alternative computing devices, namely smartphones and tablet computers. Although most people use their smartphones and tablets exclusively for more basic tasks such as social media and casual gaming, these devices have in many instances replaced a second or third PC in the household that would have performed these tasks, though most families still retain a powerful PC for serious work. Among PC form factors, desktops remain a staple in the enterprise market but have lost popularity among home buyers. PC makers and electronics retailers responded by investing their engineering and marketing resources towards laptops (initially netbooks in the late 2000s, and then the higher-performance Ultrabooks from 2011 onwards), which manufacturers believed had more potential to revive the PC market than desktops. In April 2017, StatCounter declared a "milestone in technology history and end of an era", with the mobile Android operating system becoming more popular than Windows (the operating system that made desktops dominant over mainframe computers). Windows is still most popular on desktops (and laptops), while smartphones (and tablets) use Android or iOS. Towards the middle of the 2010s, media sources began to question the existence of the post-PC trend, at least as conventionally defined, stating that the so-called post-PC devices are just other portable forms of PCs joining traditional desktop PCs, which still have their own areas of operation and continue to evolve. Although for casual use traditional desktops and laptops have seen a decline in sales, in 2018 global PC sales experienced a resurgence, driven by the business market. Desktops remain a solid fixture in the commercial and educational sectors. In 2019 the global PC market recorded its first full year of growth in eight years. Inclusive of desktops, notebooks and workstations, 268.1 million units were shipped, up 2.7% on 2018. According to the International Data Corporation (IDC), PC sales shot up 14.8% between 2020 and 2021, and the desktop market grew faster than the laptop market in the second quarter of 2021. Total PC shipments during 2021 reached 348.8 million units, up 14.8% from 2020. This represents the highest level of shipments the PC market has seen since 2012. In addition, gaming desktops have seen a global revenue increase of 54% annually. The global market for gaming desktops, laptops, and monitors was expected to grow to 61.1 million shipments by the end of 2023, up from 42.1 million, with desktops growing from 15.1 million shipments to 19 million. PC gaming as a whole accounts for 28% of the total gaming market as of 2017. This is partially due to the increasing affordability of desktop PCs. In 2024, 255.5 million PCs (including desktops and laptops) were shipped, up from 246 million in 2023 – a 3.8% year-over-year growth, with Lenovo maintaining the largest market share. Types Full-sized desktops are characterized by separate display and processing components.
These components are connected to each other by cables or wireless connections. They often come in a tower form factor. These computers are easy to customize and upgrade per user requirements, e.g. by expansion card. Early extended-size tower computers (significantly larger than a mainstream ATX case) were sometimes labeled "deskside computers", but this naming is now quite rare. Compact desktops are reduced in physical proportions compared to full-sized desktops. They are typically small-sized, inexpensive, low-power computers designed for basic tasks such as web browsing, accessing web-based applications, document processing, and audio/video playback. Hardware specifications and processing power are usually reduced, which makes them less appropriate for running complex or resource-intensive applications. A nettop is a notable example of a compact desktop. A laptop without a screen can functionally be used as a compact desktop, sometimes called a "slabtop". An all-in-one (AIO) desktop computer integrates the system's internal components into the same case as the display, thus occupying a smaller footprint (with fewer cables) than desktops that incorporate a tower. All-in-one systems are rarely labeled as desktop computers. In personal computing, a tower is a form factor of desktop computer case whose height is much greater than its width, thus having the appearance of an upstanding tower block. In computing, a pizza box enclosure is a design for desktop computers. Pizza box cases tend to be wide and flat, resembling pizza delivery boxes, hence the name. Cube workstations have a cube case enclosure to house the motherboard, PCI-E expansion cards, GPU, CPU, DRAM DIMM slots, computer cooling equipment, chipsets, I/O ports, hard disk drives, and solid-state drives. Open-frame cases offer easy service access and have no airflow problems, are well suited to building liquid-cooled systems, and have an industrial design look, but they draw a lot of dust onto components and need more frequent cleaning, although the open design makes it easy to blow the dust away. Gaming computers are desktop computers with a high-performance CPU, GPU, and RAM, optimized for playing video games at high resolution and frame rates. Gaming computer peripherals usually include mechanical keyboards for faster response time and a gaming mouse which can track higher dots-per-inch movement. Home theater PCs are connected to home entertainment systems and are typically used for amusement purposes. They come with high-definition displays, video graphics, surround sound and TV tuner systems to complement typical PC features. Over time, some traditional desktop computers have been replaced with thin clients utilizing off-site computing solutions like the cloud. As more services and applications are served over the internet from off-site servers, local computing needs decrease; this drives desktop computers to be smaller and cheaper and to need less powerful hardware. More applications, and in some cases entire virtual desktops, are moved off-site, and the desktop computer runs only an operating system or a shell application while the actual content is served from a server. Thin client computers may do almost all of their computing on a virtual machine in another site. Internal, hosted virtual desktops can offer users a completely consistent experience from anywhere. Workstations are an advanced class of personal computers, designed for a single user and more powerful than a regular PC but less powerful than a server in regular computing.
They are capable of high-resolution and three-dimensional interfaces, and are typically used to perform scientific and engineering work. Like server computers, they are often connected with other workstations. The main form factor for this class is the tower case, but most vendors also produce compact or all-in-one low-end workstations. Most tower workstations can be converted to a rack-mount version. Desktop servers are oriented toward the small-business class of servers; they are typically entry-level server machines, with computing power similar to workstations or gaming PCs and some mainstream server features, but with only basic graphics abilities. Some desktop servers can be converted to workstations. Comparison with laptops Desktops have an advantage over laptops in that the spare parts and extensions tend to be standardized, resulting in lower prices and greater availability. For example, the size and mounting of the motherboard are standardized into ATX, microATX, BTX or other form factors. Desktops have several standardized expansion slots, like conventional PCI or PCI Express, while laptops tend to have only one mini-PCI slot and one PC Card slot (or ExpressCard slot). Procedures for assembly and disassembly of desktops tend to be simple and standardized as well. This tends not to be the case for laptops, though adding or replacing some parts, like the optical drive or hard disk, or adding an extra memory module, is often quite simple. This means that a desktop computer configuration, usually a tower case, can be customized and upgraded to a greater extent than a laptop. This customization has kept tower cases popular among gamers and enthusiasts. Another advantage of the desktop is that (apart from environmental concerns) power consumption is not as critical as in laptop computers, because the desktop is powered exclusively from the wall socket. Desktop computers also provide more space for cooling fans and vents to dissipate heat, allowing enthusiasts to overclock with less risk. The two large microprocessor manufacturers, Intel and AMD, have developed special CPUs for mobile computers (e.g. laptops) that consume less power and produce less heat, but with lower performance levels. Laptop computers, conversely, offer portability that desktop systems (including small form factor and all-in-one desktops) cannot match, thanks to their compact size and clamshell design. The laptop's all-in-one design provides a built-in keyboard and a pointing device (such as a touchpad) for its user and can draw on power supplied by a rechargeable battery. Laptops also commonly integrate wireless technologies like Wi-Fi, Bluetooth, and 3G, giving them a broader range of options for connecting to the internet, though this gap is narrowing as newer desktop computers come integrated with one or more of these technologies. A desktop computer needs a UPS to handle electrical disturbances like short interruptions, blackouts, and spikes; achieving an on-battery time of more than 20–30 minutes for a desktop PC requires a large and expensive UPS. A laptop with a sufficiently charged battery can continue to be used for hours in case of a power outage and is not affected by short power interruptions and blackouts. A desktop computer often has the advantage over a comparable laptop in computational capacity. Overclocking is often more feasible on a desktop than on a laptop; similarly, hardware add-ons such as discrete graphics co-processors may only be possible to install in a desktop. See also References External links
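The UPS point above is, at bottom, simple arithmetic: on-battery runtime is roughly battery energy divided by load, derated for inverter losses. A minimal sketch of that estimate, with wattage and battery figures that are illustrative assumptions rather than values from the text:

```python
# Rough UPS runtime estimate: battery energy divided by load, derated
# for inverter losses. All figures here are illustrative assumptions;
# real runtimes are lower still, because battery capacity drops at
# high discharge rates.

def ups_runtime_minutes(battery_wh: float, load_w: float,
                        inverter_efficiency: float = 0.9) -> float:
    """Approximate on-battery runtime in minutes."""
    return battery_wh * inverter_efficiency / load_w * 60

# A desktop and monitor drawing ~300 W on a typical consumer UPS
# with ~120 Wh of battery:
print(round(ups_runtime_minutes(120, 300)))  # ~22 minutes
```

On these assumed numbers, a typical consumer unit sustains a 300 W desktop for only about 22 minutes, which is why runtimes much beyond 20–30 minutes call for a far larger and costlier UPS.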
========================================
[SOURCE: https://en.wikipedia.org/wiki/Ralph_Fowler] | [TOKENS: 921]
Contents Ralph Fowler Sir Ralph Howard Fowler (17 January 1889 – 28 July 1944) was a British mathematical physicist. Biography Ralph Howard Fowler was born on 17 January 1889 in Roydon, England, the son of Howard Fowler, from Burnham-on-Sea, and Frances Eva Dewhurst, the daughter of a cotton merchant from Manchester. After attending Winchester College, Fowler won a scholarship to Trinity College, Cambridge, where he studied mathematics and became a Wrangler in Part II of the Mathematical Tripos. In World War I, Fowler obtained a commission in the Royal Marine Artillery and was seriously wounded in the shoulder in the Gallipoli campaign. The wound ended his front-line service and enabled his friend Archibald Hill to recruit him and put his talents to proper use: as Hill's second-in-command he worked on anti-aircraft ballistics in the Anti-Aircraft Experimental Section of HMS Excellent on Whale Island, making a major contribution to the aerodynamics of spinning shells. He was awarded the OBE in 1918. In 1919, Fowler returned to Trinity College, where he became College Lecturer in Mathematics the following year. Fowler worked on thermodynamics and statistical mechanics, bringing a new approach to physical chemistry. With Arthur Milne, a comrade from the war, he wrote a seminal work on stellar spectra, temperatures, and pressures. In 1925, he was elected a Fellow of the Royal Society. He became research supervisor to Paul Dirac and, in 1926, worked with him on the statistical mechanics of white dwarf stars. In 1927, Fowler was one of the participants in the fifth Solvay Conference on Physics, held at the International Solvay Institute for Physics in Belgium. The following year, he published (with Lothar Nordheim) a seminal paper that explained the physical phenomenon now known as field electron emission, and helped to establish the validity of modern electron band theory. In 1931, he was the first to formulate and label the zeroth law of thermodynamics. The following year, he was appointed John Humphrey Plummer Professor of Mathematical Physics in the Cavendish Laboratory. In 1933, he worked with John Bernal to develop a model for the structure of water and ice known as the ice rules. In 1939, when World War II began, he resumed his work with the Ordnance Board, despite poor health, and was chosen for scientific liaison with Canada and the United States. He knew America well, having held visiting professorships at Princeton University and the University of Wisconsin–Madison. For this liaison work, he was knighted in 1942 (see MAUD Committee). He returned to Britain later in the war and worked for the Ordnance Board and the Admiralty until a few weeks before his death in 1944. Fifteen Fellows of the Royal Society and three Nobel Laureates (Chandrasekhar, Dirac, and Mott) were supervised by Fowler between 1922 and 1939. In addition to Milne, he worked with Sir Arthur Eddington, Subrahmanyan Chandrasekhar, Paul Dirac, Homi J. Bhabha, and Sir William McCrea. It was Fowler who introduced Dirac to quantum theory in 1923. Fowler also put Dirac and Werner Heisenberg in touch with each other through Niels Bohr. At Cambridge, he supervised the doctoral studies of 64 students, including John Lennard-Jones, Paul Dirac, and Garrett Birkhoff. Fowler died on 28 July 1944 in Trumpington at the age of 55. The Fowler Islands in Crystal Sound on the Antarctic Peninsula were named by the UK Antarctic Place-Names Committee in his honour. Personal life Fowler was a keen amateur cricketer who played as a wicket-keeper.
He played for Norfolk in the Minor Counties Championship in 1908 and 1909. In 1921, Fowler married Eileen Mary (1901–1930), the only daughter of Ernest Rutherford. They had four children: two sons and two daughters. Eileen died after the birth of their last child, Ruth Fowler Edwards, a geneticist and the wife of Robert G. (Bob) Edwards, the "father" of in vitro fertilisation and the 2010 Nobel Prize in Physiology or Medicine laureate. One of his grandchildren is Mary Fowler, a geophysicist and the sixth Master (2012–2020) of Darwin College, Cambridge. Selected publications References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/List_of_incidents_of_civil_unrest_in_the_United_States] | [TOKENS: 62]
Contents List of incidents of civil unrest in the United States Listed are major episodes of civil unrest in the United States. This list does not include the numerous incidents of destruction and violence associated with various sporting events. 18th century 19th century 20th century 21st century See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/35_Orionis] | [TOKENS: 209]
Contents List of stars in Orion This is the list of notable stars in the constellation Orion, sorted by decreasing brightness. The columns are:
• Name = Proper name
• B = Bayer designation
• F and/or G = Flamsteed designation and/or Gould designation
• Var = Variable star designation
• HD = Henry Draper Catalogue designation number
• HIP = Hipparcos Catalogue designation number
• RA = Right ascension for the epoch/equinox J2000.0
• Dec = Declination for the epoch/equinox J2000.0
• vis. mag. = visual magnitude (m or mv), also known as apparent magnitude
• abs. mag. = absolute magnitude (Mv)
• Dist. (ly) = Distance in light-years from Earth
• Sp. class = Spectral class of the star in the stellar classification system
• Notes = Common name(s) or alternate name(s); comments; notable properties [for example: multiple star status, range of variability if it is a variable star, exoplanets, etc.]
See also References
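The vis. mag., abs. mag. and Dist. (ly) columns are tied together by the standard distance-modulus relation, M = m − 5·log10(d_pc) + 5, with the distance converted from light-years to parsecs. A minimal sketch; the Rigel figures used here are rounded inputs for illustration only:

```python
import math

LY_PER_PARSEC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance-modulus relation: M = m - 5*log10(d_pc) + 5."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc) + 5

# Rigel (Beta Orionis), with rounded inputs of m = 0.13 at ~860 ly:
print(round(absolute_magnitude(0.13, 860), 1))  # about -7.0
```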
========================================
[SOURCE: https://en.wikipedia.org/wiki/Sea_level_rise] | [TOKENS: 14753]
Contents Sea level rise The sea level has been rising since the end of the Last Glacial Maximum, which was around 20,000 years ago. Between 1901 and 2018, the average sea level rose by 15–25 cm (6–10 in), with an increase of 2.3 mm (0.091 in) per year since the 1970s.: 1216 This was faster than the sea level had ever risen over at least the past 3,000 years.: 1216 The rate accelerated to 4.62 mm (0.182 in)/yr for the decade 2013–2022. Climate change due to human activities is the main cause of this persistent acceleration.: 5, 8 Between 1993 and 2018, melting ice sheets and glaciers accounted for 44% of sea level rise, with another 42% resulting from thermal expansion of water.: 1576 Sea level rise lags behind changes in the Earth's temperature by decades, and sea level rise will therefore continue to accelerate between now and 2050 in response to warming that has already happened. What happens after that depends on future human greenhouse gas emissions. If there are very deep cuts in emissions, sea level rise would slow between 2050 and 2100. It could then rise by between 30 cm (1 ft) and 1.0 m (3+1⁄3 ft) between the early 2020s and 2100, or by approximately 60 cm (2 ft) to 130 cm (4+1⁄2 ft) from the 19th century to 2100. With high emissions it would instead accelerate further, and could rise by 50 cm (1.6 ft) or even by 1.9 m (6.2 ft) by 2100.: 1302 In the long run, sea level rise would amount to 2–3 m (7–10 ft) over the next 2000 years if warming holds at its current 1.5 °C (2.7 °F) above the pre-industrial past. It would be 19–22 metres (62–72 ft) if warming peaks at 5 °C (9.0 °F).: 21 Rising seas affect every coastal population on Earth. This can be through flooding, higher storm surges, king tides, and increased vulnerability to tsunamis; reported increases in flood hazard potential are often exceedingly large, with factors ranging from 10 to 1000 even for modest sea-level rise of 0.5 m or less. There are many knock-on effects. They lead to loss of coastal ecosystems such as mangrove swamps. There may be a reduction in crop yields because of increasing salt levels in irrigation water. Damage to ports disrupts sea trade. The sea level rise projected by 2050 will expose places currently inhabited by tens of millions of people to annual flooding. Without a sharp reduction in greenhouse gas emissions, this may increase to hundreds of millions in the latter decades of the century. Local factors such as tidal range or land subsidence will greatly affect the severity of impacts. For instance, sea level rise in the United States is likely to be two to three times greater than the global average by the end of the century. Yet, of the twenty countries with the greatest exposure to sea level rise, twelve are in Asia, including Indonesia, Bangladesh and the Philippines. The resilience and adaptive capacity of ecosystems and countries also varies, which will result in more or less pronounced impacts. The greatest impact on human populations in the near term will occur in low-lying Caribbean and Pacific islands, including atolls. Sea level rise will make many of them uninhabitable later this century. Societies can adapt to sea level rise in multiple ways: managed retreat, accommodating coastal change, or protecting against the sea. Protection may rely on hard-construction practices such as seawalls, or on soft approaches such as dune rehabilitation and beach nourishment. Sometimes these adaptation strategies go hand in hand.
At other times choices must be made among different strategies. Poorer nations may also struggle to implement the same approaches to adapt to sea level rise as richer states. Observations Between 1901 and 2018, the global mean sea level rose by about 20 cm (7.9 in). More precise data gathered from satellite radar measurements revealed an increase of 7.5 cm (3.0 in) from 1993 to 2017 (an average of 2.9 mm (0.11 in)/yr). This accelerated to 4.62 mm (0.182 in)/yr for 2013–2022. Paleoclimate data shows that this rate of sea level rise is the fastest it has been over at least the past 3,000 years.: 1216 A research paper published in October 2025 updated the global sea level curve for the last 11,700 years, finding that global mean sea-level rise since 1900 has been faster than in any century over at least the last 4,000 years. While sea level rise is uniform around the globe, some land masses are moving up or down as a consequence of subsidence (land sinking or settling) or post-glacial rebound (land rising as melting ice reduces its weight). Therefore, local relative sea level rise may be higher or lower than the global average. Changing ice masses also affect the distribution of sea water around the globe through gravity. Projections Several complementary approaches are used for sea level rise (SLR) projections. One is process-based modeling, where ice melting is computed through an ice-sheet model and rising sea temperature and expansion through a general circulation model, and these contributions are then added up. The so-called semi-empirical approach instead applies statistical techniques and basic physical modeling to the observed recent sea level rise and to reconstructions from older historical and geological data (known as paleoclimate modeling). It was developed because process-based model projections in past IPCC reports (such as the Fourth Assessment Report from 2007) were found to underestimate the sea level rise that had already been observed. By 2013, improvements in modeling had addressed this issue, and model and semi-empirical projections for the year 2100 are now very similar. Yet semi-empirical estimates are reliant on the quality of the available observations and struggle to represent non-linearities, while processes for which there is not enough information cannot be modeled at all. Thus, another approach is to combine the opinions of a large number of scientists in what is known as a structured expert judgement (SEJ). Some analyses suggest that if fossil fuel use continues indefinitely and all polar and mountain ice eventually melts, global sea level could rise by as much as 216 feet (about 66 m). Variations of these primary approaches exist. For instance, large climate models are computationally expensive, so less complex models are often used in their place for simpler tasks, such as projecting flood risk in specific regions. A structured expert judgement may also be used in combination with modeling to determine which outcomes are more or less likely, in what is known as "shifted SEJ". Semi-empirical techniques can be combined with so-called "intermediate-complexity" models. After 2016, some ice sheet modeling exhibited the so-called ice cliff instability in Antarctica, which results in substantially faster disintegration and retreat than otherwise simulated. The differences are limited at low warming, but at higher warming levels, ice cliff instability predicts far greater sea level rise than any other approach.
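In its simplest form, the semi-empirical approach described above reduces to a one-line model: the rate of rise is taken as proportional to warming above an equilibrium temperature, dH/dt = a·(T − T0), with the constants fitted to the observed record. A minimal sketch of the idea; the parameter values here are illustrative stand-ins, not fitted results:

```python
# Minimal semi-empirical sea level model: the rate of rise is taken to
# be proportional to warming above an equilibrium temperature,
#     dH/dt = a * (T - T0),
# with a and T0 normally fitted to the observed record. The values
# below are illustrative stand-ins, not fitted results.

A_MM_PER_YR_PER_DEGC = 3.4   # sensitivity a, assumed
T0_DEGC = -0.5               # equilibrium temperature anomaly, assumed

def cumulative_slr_mm(annual_temp_anomalies: list[float]) -> float:
    """Integrate dH/dt = a*(T - T0) over a yearly temperature series."""
    return sum(A_MM_PER_YR_PER_DEGC * (t - T0_DEGC)
               for t in annual_temp_anomalies)

# Toy scenario: warming climbs linearly from 1.2 C to 2.0 C over 2020-2100.
temps = [1.2 + 0.8 * i / 80 for i in range(81)]
print(round(cumulative_slr_mm(temps)))  # about 578 mm over the period
```

The appeal of the approach is visible in the sketch: it needs only a temperature pathway, not an explicit model of each ice sheet, which is also why it cannot represent non-linear processes such as ice cliff collapse.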
One study reported that sea level is expected to rise by another 6.6 inches (169 millimetres) globally over the next 30 years if the current trend continues, leading to 16.63 inches (42.25 centimetres) under 1.75 °C of warming by 2100. The Intergovernmental Panel on Climate Change (IPCC) is the largest and most influential scientific organization on climate change, and since 1990 it has provided several plausible scenarios of 21st-century sea level rise in each of its major reports. The differences between scenarios are mainly due to uncertainty about future greenhouse gas emissions, which depend on future economic developments and on future political action, which is hard to predict. Each scenario provides an estimate for sea level rise as a range with a lower and upper limit to reflect the unknowns. The scenarios in the 2013–2014 Fifth Assessment Report (AR5) were called Representative Concentration Pathways, or RCPs, and the scenarios in the IPCC Sixth Assessment Report (AR6) are known as Shared Socioeconomic Pathways, or SSPs. A large difference between the two was the addition of SSP1-1.9 to AR6, which represents meeting the most ambitious Paris climate agreement goal of 1.5 °C (2.7 °F). In that case, the likely range of sea level rise by 2100 is 28–55 cm (11–21+1⁄2 in).: 1302 The lowest scenario in AR5, RCP2.6, would see greenhouse gas emissions low enough to meet the goal of limiting warming by 2100 to 2 °C (3.6 °F). It shows sea level rise in 2100 of about 44 cm (17 in), with a range of 28–61 cm (11–24 in). The "moderate" scenario, in which CO2 emissions take a decade or two to peak and atmospheric concentrations do not plateau until the 2070s, is called RCP4.5. Its likely range of sea level rise is 36–71 cm (14–28 in). In the highest scenario, the RCP8.5 pathway, sea level would rise between 52 and 98 cm (20+1⁄2 and 38+1⁄2 in). AR6 had equivalents for both scenarios, but it estimated larger sea level rise under both. In AR6, the SSP1-2.6 pathway results in a range of 32–62 cm (12+1⁄2–24+1⁄2 in) by 2100. The "moderate" SSP2-4.5 results in a 44–76 cm (17+1⁄2–30 in) range by 2100, and SSP5-8.5 leads to 65–101 cm (25+1⁄2–40 in).: 1302 This general increase of projections in AR6 came after improvements in ice-sheet modeling and the incorporation of structured expert judgements. These decisions came as the observed ice-sheet erosion in Greenland and Antarctica had matched the upper-end range of the AR5 projections by 2020, and after the finding that AR5 projections were likely too slow relative to an extrapolation of observed sea level rise trends, whereas the subsequent reports had improved in this regard. Further, AR5 was criticized by multiple researchers for excluding detailed estimates of the impact of "low-confidence" processes such as marine ice sheet and marine ice cliff instability, which can substantially accelerate ice loss and potentially add "tens of centimeters" to sea level rise within this century. AR6 includes a version of SSP5-8.5 where these processes take place, and in that case, sea level rise of up to 1.6 m (5+1⁄3 ft) by 2100 could not be ruled out.: 1302 The greatest uncertainty with sea level rise projections is associated with the so-called marine ice sheet instability (MISI), and, even more so, marine ice cliff instability (MICI).: 1302 These processes are mainly associated with the West Antarctic Ice Sheet, but may also apply to some of Greenland's glaciers.
The former suggests that when glaciers are mostly underwater on retrograde (backwards-sloping) bedrock, the water melts more and more of their height as their retreat continues, thus accelerating their breakdown on its own. This is widely accepted, but is difficult to model. The latter posits that coastal ice cliffs which exceed ~90 m (295+1⁄2 ft) in above-water height, corresponding to roughly 800 m (2,624+1⁄2 ft) of ice thickness at the base, are likely to rapidly collapse under their own weight once the ice shelves propping them up are gone. The collapse then exposes the ice masses behind them to the same instability, potentially resulting in a self-sustaining cycle of cliff collapse and rapid ice sheet retreat. This theory has been highly influential: in a 2020 survey of 106 experts, the 2016 paper which suggested 1 m (3+1⁄2 ft) or more of sea level rise by 2100 from Antarctica alone was considered even more important than the 2014 IPCC Fifth Assessment Report. Even more rapid sea level rise was proposed in a 2016 study led by Jim Hansen, which hypothesized multi-meter sea level rise in 50–100 years as a plausible outcome of high emissions, but it remains a minority view amongst the scientific community. Marine ice cliff instability has also been very controversial, since it was first proposed as a modelling exercise, and the observational evidence from both the past and the present is very limited and ambiguous. So far, only one episode of seabed gouging by ice from the Younger Dryas period appears truly consistent with this theory, but it lasted for an estimated 900 years, so it is unclear if it supports rapid sea level rise in the present. Modelling which investigated the hypothesis after 2016 often suggested that the ice shelves in the real world may collapse too slowly to make this scenario relevant, or that ice mélange – debris produced as the glacier breaks down – would quickly build up in front of the glacier and significantly slow or even outright stop the instability soon after it began. Due to these uncertainties, some scientists – including the originators of the hypothesis, Robert DeConto and David Pollard – have suggested that the best way to resolve the question would be to precisely determine sea level rise during the Last Interglacial. MICI can be effectively ruled out if SLR at the time was lower than 4 m (13 ft), while it is very likely if the SLR was greater than 6 m (19+1⁄2 ft). As of 2023, the most recent analysis indicates that the Last Interglacial SLR is unlikely to have been higher than 2.7 m (9 ft), as higher values in other research, such as 5.7 m (18+1⁄2 ft), appear inconsistent with the new paleoclimate data from The Bahamas and the known history of the Greenland Ice Sheet. Even if the temperature stabilizes, significant sea-level rise (SLR) will continue for centuries, consistent with paleo records of sea level rise.: 1189 This is due to the high level of inertia in the carbon cycle and the climate system, owing to factors such as the slow diffusion of heat into the deep ocean, leading to a longer climate response time. A 2018 paper estimated that sea level rise reached by 2300 would increase by a median of 20 cm (8 in) for every five-year delay in peaking CO2 emissions, with a 5% likelihood of a 1 m (3+1⁄2 ft) increase per such delay. The same estimate found that if the temperature stabilized below 2 °C (3.6 °F), 2300 sea level rise would still exceed 1.5 m (5 ft). Early net zero and slowly falling temperatures could limit it to 70–120 cm (27+1⁄2–47 in).
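The 2018 estimate just cited implies a simple rule of thumb. A toy calculation, under the assumption (made here for illustration only) that the median relationship stays linear over modest delays:

```python
# Rule of thumb from the 2018 estimate above: each five-year delay in
# peaking CO2 emissions adds a median of ~20 cm to sea level in 2300.
# Assumes, for illustration only, that the median relationship stays
# linear over modest delays.

MEDIAN_CM_PER_5YR_DELAY = 20

def extra_slr_2300_cm(peak_delay_years: float) -> float:
    """Median additional year-2300 sea level rise for a peaking delay."""
    return MEDIAN_CM_PER_5YR_DELAY * peak_delay_years / 5

for delay in (5, 10, 20):
    print(f"{delay:2d}-year delay -> ~{extra_slr_2300_cm(delay):.0f} cm more by 2300")
```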
By 2021, the IPCC Sixth Assessment Report was able to provide estimates for sea level rise in 2150. Keeping warming to 1.5 °C under the SSP1-1.9 scenario would result in sea level rise in the 17th–83rd percentile range of 37–86 cm (14+1⁄2–34 in). In the SSP1-2.6 pathway the range would be 46–99 cm (18–39 in), for SSP2-4.5 a 66–133 cm (26–52+1⁄2 in) range by 2150, and for SSP5-8.5 a rise of 98–188 cm (38+1⁄2–74 in). It stated that "low-confidence, high-impact" processes could drive 0.63–1.60 m (2–5 ft) of mean sea level rise by 2100, and that by 2150, total sea level rise in this scenario would be in the range of 0.98–4.82 m (3–16 ft).: 1302 AR6 also provided lower-confidence estimates for year 2300 sea level rise under SSP1-2.6 and SSP5-8.5 with various impact assumptions. In the best-case scenario, under SSP1-2.6 with no ice sheet acceleration after 2100, the estimate was only 0.8–2.0 metres (2.6–6.6 ft). In the worst estimated scenario, SSP5-8.5 with ice cliff instability, the projected range for total sea level rise was 9.5–16.2 metres (31–53 ft) by the year 2300.: 1306 Projections for subsequent years are more difficult. In 2019, when 22 experts on ice sheets were asked to estimate 2200 and 2300 SLR under the 5 °C warming scenario, they gave 90% confidence intervals of −10 cm (4 in) to 740 cm (24+1⁄2 ft) and −9 cm (3+1⁄2 in) to 970 cm (32 ft), respectively. (Negative values represent the extremely low probability of large climate change-induced increases in precipitation greatly elevating ice sheet surface mass balance.) In 2020, 106 experts who had contributed to 6 or more papers on sea level estimated a median of 118 cm (46+1⁄2 in) of SLR in the year 2300 for the low-warming RCP2.6 scenario and a median of 329 cm (129+1⁄2 in) for the high-warming RCP8.5. The former scenario had a 5%–95% confidence range of 24–311 cm (9+1⁄2–122+1⁄2 in), and the latter of 88–783 cm (34+1⁄2–308+1⁄2 in). After 500 years, sea level rise from thermal expansion alone may have reached only half of its eventual level, likely within a range of 0.5–2 m (1+1⁄2–6+1⁄2 ft). Additionally, tipping points of the Greenland and Antarctic ice sheets are likely to play a larger role over such timescales. Ice loss from Antarctica is likely to dominate very long-term SLR, especially if the warming exceeds 2 °C (3.6 °F). Continued carbon dioxide emissions from fossil fuel sources could cause additional tens of metres of sea level rise over the next millennia. Burning all fossil fuels on Earth would be sufficient to melt the entire Antarctic ice sheet, causing about 58 m (190 ft) of sea level rise. Year 2021 IPCC estimates for the amount of sea level rise over the next 2,000 years project that sea levels would continue to rise for several thousand years after emissions cease, due to the slow nature of the climate response to heat; the same report provides corresponding estimates on a timescale of 10,000 years. Measurements Variations in the amount of water in the oceans, changes in its volume, or varying land elevation compared to the sea surface can drive sea level changes. Over a consistent time period, assessments can attribute contributions to sea level rise and provide early indications of change in trajectory. This helps to inform adaptation plans. The different techniques used to measure changes in sea level do not measure exactly the same level: tide gauges can only measure relative sea level, while satellites can also measure absolute sea level changes.
To get precise measurements for sea level, researchers studying the ice and oceans factor in ongoing deformations of the solid Earth. They look in particular at landmasses still rising from past ice masses retreating, and at the Earth's gravity and rotation. Since the launch of TOPEX/Poseidon in 1992, an overlapping series of altimetric satellites has been continuously recording the sea level and its changes. These satellites can measure the hills and valleys in the sea caused by currents and detect trends in their height. To measure the distance to the sea surface, the satellites send a microwave pulse towards Earth and record the time it takes to return after reflecting off the ocean's surface. Microwave radiometers correct for the additional delay caused by water vapor in the atmosphere. Combining these data with the precisely known location of the spacecraft determines the sea-surface height to within a few centimetres. These satellite measurements have estimated rates of sea level rise for 1993–2017 at 3.0 ± 0.4 millimetres (1⁄8 ± 1⁄64 in) per year. Satellites are useful for measuring regional variations in sea level. An example is the substantial rise between 1993 and 2012 in the western tropical Pacific. This sharp rise has been linked to increasing trade winds, which occur when the Pacific Decadal Oscillation (PDO) and the El Niño–Southern Oscillation (ENSO) change from one state to the other. The PDO is a basin-wide climate pattern consisting of two phases, each commonly lasting 10 to 30 years, while the ENSO has a shorter period of 2 to 7 years. The global network of tide gauges is the other important source of sea-level observations. Compared to the satellite record, this record has major spatial gaps but covers a much longer period. Coverage of tide gauges started mainly in the Northern Hemisphere, and data for the Southern Hemisphere remained scarce up to the 1970s. The longest-running sea-level measurements, the NAP or Amsterdam Ordnance Datum, were established in 1675 in Amsterdam. Record collection is also extensive in Australia, including measurements by Thomas Lempriere, an amateur meteorologist, beginning in 1837. Lempriere established a sea-level benchmark on a small cliff on the Isle of the Dead near the Port Arthur convict settlement in 1841. Together with satellite data for the period after 1992, this network established that the global mean sea level rose 19.5 cm (7.7 in) between 1870 and 2004, at an average rate of about 1.44 mm/yr. (For the 20th century the average is 1.7 mm/yr.) By 2018, data collected by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) had shown that the global mean sea level was rising by 3.2 mm (1⁄8 in) per year, double the average 20th-century rate. The 2023 World Meteorological Organization report found further acceleration, to 4.62 mm/yr over the 2013–2022 period. These observations help to check and verify predictions from climate change simulations. Regional differences are also visible in the tide gauge data. Some are caused by local sea level differences; others are due to vertical land movements. In Europe, some land areas are rising while others are sinking. Since 1970, most tidal stations have measured higher seas, but sea levels along the northern Baltic Sea have dropped due to post-glacial rebound.
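The pulse-timing measurement described above reduces to a short computation: the one-way range is the speed of light times half the echo delay, and subtracting that range from the satellite's known orbital altitude yields sea-surface height. A schematic sketch with made-up numbers (real processing applies many further corrections):

```python
# Schematic of the radar-altimetry computation described above:
# sea-surface height = known orbital altitude - one-way radar range,
# where range = c * (round-trip time) / 2. The numbers are made up;
# real processing also applies ionospheric, tidal, and sea-state
# corrections on top of the radiometer's water-vapour correction.

C_M_PER_S = 299_792_458.0  # speed of light

def sea_surface_height_m(orbit_altitude_m: float, round_trip_s: float,
                         wet_tropo_correction_m: float = 0.0) -> float:
    """Sea-surface height above the reference ellipsoid, in metres."""
    one_way_range = C_M_PER_S * round_trip_s / 2 - wet_tropo_correction_m
    return orbit_altitude_m - one_way_range

# A TOPEX/Poseidon-class orbit (~1,336 km) and an illustrative echo delay:
print(sea_surface_height_m(1_336_000.0, 0.0089120, 0.15))  # ~125 m
```

A centimetre of error in the range translates directly into a centimetre of error in sea-surface height, which is why both the pulse timing and the spacecraft's orbit must be known so precisely.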
An understanding of past sea level is an important guide to where current changes in sea level will end up. In the recent geological past, thermal expansion from increased temperatures and changes in land ice have been the dominant causes of sea level rise. The last time that the Earth was 2 °C (3.6 °F) warmer than pre-industrial temperatures was 120,000 years ago, when warming due to Milankovitch cycles (changes in the amount of sunlight due to slow changes in the Earth's orbit) caused the Eemian interglacial. Sea levels during that warmer interglacial were at least 5 m (16 ft) higher than now. The Eemian warming was sustained over a period of thousands of years, and the size of the rise in sea level implies a large contribution from the Antarctic and Greenland ice sheets.: 1139 Around three million years ago, levels of atmospheric carbon dioxide of around 400 parts per million (similar to the 2000s) were associated with temperatures over 2–3 °C (3.6–5.4 °F) above pre-industrial. This temperature increase eventually melted one third of Antarctica's ice sheet, causing sea levels to rise 20 metres above preindustrial levels. Since the Last Glacial Maximum about 20,000 years ago, sea level has risen by more than 125 metres (410 ft). Rates varied from less than 1 mm/year during the pre-industrial era to 40+ mm/year when major ice sheets over Canada and Eurasia melted. Meltwater pulses are periods of fast sea level rise caused by the rapid disintegration of these ice sheets. The rate of sea level rise started to slow down about 8,200 years before today, and sea level was almost constant for the last 2,500 years. The recent trend of rising sea level started at the end of the 19th or beginning of the 20th century. Causes The three main reasons why global warming causes sea levels to rise are the expansion of the oceans due to heating, water inflow from melting ice sheets, and water inflow from glaciers. Other factors affecting sea level rise include changes in snow mass and flow from terrestrial water storage, though the contribution from these is thought to be small. Glacier retreat and ocean expansion have dominated sea level rise since the start of the 20th century. Some of the losses from glaciers are offset when precipitation falls as snow, accumulates, and over time forms glacial ice. If precipitation, surface processes and ice loss at the edge balance each other, sea level remains the same. Because this precipitation began as water vapor evaporated from the ocean surface, the effects of climate change on the water cycle can even increase ice build-up. However, this effect is not enough to fully offset ice losses, and sea level rise continues to accelerate. The contributions of the two large ice sheets, in Greenland and Antarctica, are likely to increase in the 21st century. They store most of the land ice (~99.5%), with a sea-level equivalent (SLE) of 7.4 m (24 ft 3 in) for Greenland and 58.3 m (191 ft 3 in) for Antarctica. Thus, melting of all the ice on Earth would result in about 70 m (229 ft 8 in) of sea level rise, although this would require at least 10,000 years and up to 10 °C (18 °F) of global warming. The oceans store more than 90% of the extra heat added to the climate system by Earth's energy imbalance and act as a buffer against its effects. This means that the same amount of heat that would increase the average world ocean temperature by 0.01 °C (0.018 °F) would increase atmospheric temperature by approximately 10 °C (18 °F). So a small change in the mean temperature of the ocean represents a very large change in the total heat content of the climate system.
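The 0.01 °C versus ~10 °C comparison above follows directly from the relative heat capacities of the two reservoirs: the ocean is roughly 270 times more massive than the atmosphere, and seawater's specific heat is about four times that of air. A back-of-envelope check with standard rounded values:

```python
# Back-of-envelope check of the claim above: the heat that warms the
# entire ocean by 0.01 C would warm the atmosphere by roughly 10 C.
# Masses and specific heats are standard rounded values.

OCEAN_MASS_KG = 1.4e21   # total mass of the oceans
ATMOS_MASS_KG = 5.1e18   # total mass of the atmosphere
CP_SEAWATER = 3990.0     # J/(kg*K), specific heat of seawater (approx.)
CP_AIR = 1005.0          # J/(kg*K), specific heat of air, const. pressure

heat_j = OCEAN_MASS_KG * CP_SEAWATER * 0.01       # warms the ocean 0.01 C
atmos_warming_c = heat_j / (ATMOS_MASS_KG * CP_AIR)
print(round(atmos_warming_c, 1))                  # ~10.9 C
```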
Trends have shown that the ocean is 0.7 °C (1.2 °F) warmer than it was in 1950; this has contributed to more than 15 cm (6 in) of sea level rise. Winds and currents move heat into deeper parts of the ocean, some of it reaching depths of more than 2,000 m (6,600 ft). The Southern Ocean accounts for approximately 40% ± 5% of global ocean heat uptake, highlighting its critical role in Earth's climate system. When the ocean gains heat, the water expands and sea level rises. Warmer water and water under great pressure (due to depth) expand more than cooler water and water under less pressure.: 1161 Consequently, cold Arctic Ocean water will expand less than warm tropical water. Different climate models present slightly different patterns of ocean heating, so their projections do not agree fully on how much ocean heating contributes to sea level rise. The large volume of ice on the Antarctic continent stores around 60% of the world's fresh water (90%, excluding groundwater). Antarctica is experiencing ice loss from coastal glaciers in West Antarctica and some glaciers of East Antarctica, while gaining mass from increased snow build-up inland, particularly in the East. This leads to contradictory trends. There are different satellite methods for measuring ice mass and change; combining them helps to reconcile the differences, though there can still be variations between the studies. In 2018, a systematic review estimated an average annual ice loss of 43 gigatonnes (Gt) across the entire continent between 1992 and 2002. This tripled to an annual average of 220 Gt from 2012 to 2017. However, a 2021 analysis of data from four different research satellite systems (Envisat, European Remote-Sensing Satellite, GRACE and GRACE-FO, and ICESat) indicated an annual mass loss of only about 12 Gt from 2012 to 2016, due to greater ice gain in East Antarctica than estimated earlier. It is known that West Antarctica, at least, will continue to lose mass in the future, and the likely future losses of sea ice and ice shelves, which block warmer currents from direct contact with the ice sheet, could accelerate declines even in East Antarctica. Altogether, Antarctica is the source of the largest uncertainty for future sea level projections. In 2019, the SROCC assessed several studies attempting to estimate 2300 sea level rise caused by ice loss in Antarctica alone, arriving at projected estimates of 0.07–0.37 metres (0.23–1.21 ft) for the low-emission RCP2.6 scenario, and 0.60–2.89 metres (2.0–9.5 ft) in the high-emission RCP8.5 scenario.: 1272 This wide range of estimates is mainly due to the uncertainties regarding marine ice sheet and marine ice cliff instabilities. The world's largest potential source of sea level rise is the East Antarctic Ice Sheet (EAIS). It is 2.2 km thick on average and holds enough ice to raise global sea levels by 53.3 m (174 ft 10 in). Its great thickness and high elevation make it more stable than the other ice sheets. As of the early 2020s, most studies show that it is still gaining mass. Some analyses have suggested it began to lose mass in the 2000s, but they over-extrapolated some observed losses onto poorly observed areas; a more complete observational record shows continued mass gain. In spite of the net mass gain, some East Antarctic glaciers have lost ice in recent decades due to ocean warming and declining structural support from the local sea ice, such as Denman Glacier and Totten Glacier.
Totten Glacier is particularly important because it stabilizes the Aurora Subglacial Basin. Subglacial basins such as the Aurora and Wilkes Basins are major ice reservoirs, together holding as much ice as all of West Antarctica, and they are more vulnerable than the rest of East Antarctica. Their collective tipping point probably lies at around 3 °C (5.4 °F) of global warming, though it may be as high as 6 °C (11 °F) or as low as 2 °C (3.6 °F). Once this tipping point is crossed, the collapse of these subglacial basins could take place over as little as 500 or as much as 10,000 years; the median timeline is 2,000 years. Depending on how many subglacial basins are vulnerable, their collapse would cause sea level rise of between 1.4 m (4 ft 7 in) and 6.4 m (21 ft 0 in). On the other hand, the whole EAIS would not be committed to collapse until global warming reaches 7.5 °C (13.5 °F), with a range between 5 °C (9.0 °F) and 10 °C (18 °F), and it would take at least 10,000 years to disappear. Some scientists have estimated that warming would have to reach at least 6 °C (11 °F) to melt two thirds of its volume. East Antarctica contains the largest potential source of sea level rise, but the West Antarctic Ice Sheet (WAIS) is substantially more vulnerable to small and moderate temperature rises. Temperatures in West Antarctica have increased significantly, unlike in East Antarctica (the Antarctic Peninsula has warmed even more strongly). The trend in West Antarctica was between 0.08 °C (0.14 °F) and 0.96 °C (1.73 °F) per decade between 1976 and 2012. Satellite observations recorded a substantial increase in WAIS melting from 1992 to 2017, resulting in 7.6 ± 3.9 mm (19⁄64 ± 5⁄32 in) of sea level rise from Antarctica; outflow glaciers in the Amundsen Sea Embayment played a disproportionate role. The median estimated increase in sea level rise from Antarctica by 2100 is ~11 cm (5 in). There is no difference between scenarios, because the increased warming would intensify the water cycle and increase snowfall accumulation over the EAIS at about the same rate as it would increase ice loss from the WAIS. However, most of the bedrock underlying the WAIS lies well below sea level, and the ice sheet has to be buttressed by the Thwaites and Pine Island glaciers. If these glaciers were to collapse, the entire ice sheet would as well. Their disappearance would take at least several centuries, but is considered almost inevitable, as their bedrock topography deepens inland and becomes more vulnerable to meltwater, in what is known as marine ice sheet instability. The contribution of these glaciers to global sea levels has already accelerated since the year 2000. The Thwaites Glacier now accounts for 4% of global sea level rise. It could start to lose even more ice if the Thwaites Ice Shelf fails and no longer stabilizes it, which could potentially occur in the mid-2020s. A combination of ice sheet instability with other important but hard-to-model processes, such as hydrofracturing (meltwater collecting atop the ice sheet, pooling into fractures and forcing them open) or smaller-scale changes in ocean circulation, could cause the WAIS to contribute up to 41 cm (16 in) by 2100 under the low-emission scenario and up to 57 cm (22 in) under the highest-emission one. Ice cliff instability would cause a contribution of 1 m (3+1⁄2 ft) or more if it were applicable. The melting of all the ice in West Antarctica would increase the total sea level rise to 4.3 m (14 ft 1 in). However, mountain ice caps not in contact with water are less vulnerable than the majority of the ice sheet, which is located below sea level.
Its collapse would cause ~3.3 m (10 ft 10 in) of sea level rise. This disappearance would take an estimated 2,000 years; the absolute minimum for the loss of West Antarctic ice is 500 years, and the potential maximum is 13,000 years. Once ice loss from West Antarctica is triggered, the only way to restore the ice sheet to near-present values is by lowering the global temperature to 1 °C (1.8 °F) below the preindustrial level, that is, 2 °C (3.6 °F) below the temperature of 2020. Other researchers have suggested that a climate engineering intervention to stabilize the ice sheet's glaciers may delay its loss by centuries and give more time to adapt, but this is an uncertain proposal, and would end up as one of the most expensive projects ever attempted. Most ice on Greenland is part of the Greenland ice sheet, which is 3 km (10,000 ft) thick at its thickest point. The rest of Greenland's ice forms isolated glaciers and ice caps. The average annual ice loss in Greenland more than doubled in the early 21st century compared to the 20th century, and its contribution to sea level rise correspondingly increased from 0.07 mm per year between 1992 and 1997 to 0.68 mm per year between 2012 and 2017. Total ice loss from the Greenland ice sheet between 1992 and 2018 amounted to 3,902 gigatonnes (Gt) of ice, equivalent to an SLR contribution of 10.8 mm. The contribution for the 2012–2016 period was equivalent to 37% of sea level rise from land ice sources (excluding thermal expansion). This observed rate of ice sheet melting is at the higher end of predictions from past IPCC assessment reports. In 2021, AR6 estimated that by 2100, the melting of the Greenland ice sheet would most likely add around 6 cm (2+1⁄2 in) to sea levels under the low-emission scenario, and 13 cm (5 in) under the high-emission scenario. The first scenario, SSP1-2.6, largely fulfils the Paris Agreement goals, while the other, SSP5-8.5, has emissions accelerate throughout the century. The uncertainty about ice sheet dynamics affects both pathways. In the best case, the ice sheet under SSP1-2.6 gains enough mass by 2100 through surface mass balance feedbacks to reduce sea levels by 2 cm (1 in); in the worst case, it adds 15 cm (6 in). For SSP5-8.5, the best-case scenario is adding 5 cm (2 in) to sea levels, and the worst case is adding 23 cm (9 in).: 1260 Greenland's peripheral glaciers and ice caps crossed an irreversible tipping point around 1997, and sea level rise from their loss is now unstoppable. However temperatures change in the future, the warming of 2000–2019 has already damaged the ice sheet enough for it to eventually lose ~3.3% of its volume, committing it to 27 cm (10+1⁄2 in) of future sea level rise. At a certain level of global warming, the Greenland ice sheet will melt almost completely. Ice cores show this happened at least once over the last million years, during which temperatures have at most been 2.5 °C (4.5 °F) warmer than the preindustrial average, or 1 °C (1.8 °F) warmer than the 2025 temperature. 2012 modelling suggested that the tipping point of the ice sheet lies between 0.8 °C (1.4 °F) and 3.2 °C (5.8 °F) of warming; 2023 modelling has narrowed this threshold to a range of 1.7–2.3 °C (3.1–4.1 °F), which is consistent with the empirical 2.5 °C (4.5 °F) upper limit from ice cores. If temperatures reach or exceed that level, reducing the global temperature to 1.5 °C (2.7 °F) above pre-industrial levels or lower would prevent the loss of the entire ice sheet.
One way to do this in theory would be large-scale carbon dioxide removal, but the overshoot would still cause greater ice losses and sea level rise from Greenland than if the threshold had not been breached in the first place. If the tipping point is instead durably but mildly crossed, the ice sheet would take between 10,000 and 15,000 years to disintegrate entirely, with a most likely estimate of 10,000 years. If climate change continues along its worst trajectory and temperatures continue to rise quickly over multiple centuries, the ice sheet would take only 1,000 years to melt. There are roughly 200,000 glaciers on Earth, spread out across all continents. Less than 1% of glacier ice is in mountain glaciers, compared to 99% in Greenland and Antarctica. However, this small size also makes mountain glaciers more vulnerable to melting than the larger ice sheets. This means they have made a disproportionate contribution to historical sea level rise and are set to contribute a smaller, but still significant, fraction of sea level rise in the 21st century. Observational and modelling studies of mass loss from glaciers and ice caps show they contributed 0.2–0.4 mm per year to sea level rise, averaged over the 20th century. The contribution for the 2012–2016 period was nearly as large as that of Greenland: 0.63 mm of sea level rise per year, equivalent to 34% of sea level rise from land ice sources. Glaciers contributed around 40% of sea level rise during the 20th century, with estimates for the 21st century of around 30%. In 2023, a Science paper estimated that at 1.5 °C (2.7 °F), one quarter of mountain glacier mass would be lost by 2100, and nearly half at 4 °C (7.2 °F), contributing ~9 cm (3+1⁄2 in) and ~15 cm (6 in) to sea level rise, respectively. Because glacier mass is disproportionately concentrated in the most resilient glaciers, this would in practice remove 49–83% of the world's glaciers. The same paper further estimated that the current likely trajectory of 2.7 °C (4.9 °F) would result in an SLR contribution of ~11 cm (4+1⁄2 in) by 2100. Mountain glaciers are even more vulnerable over the longer term. In 2022, another Science paper estimated that almost no mountain glaciers could survive once warming crosses 2 °C (3.6 °F), and that their complete loss is largely inevitable around 3 °C (5.4 °F); there is even a possibility of complete loss after 2100 at just 1.5 °C (2.7 °F). This could happen as early as 50 years after the tipping point is crossed, although 200 years is the most likely value, and the maximum is around 1,000 years. Sea ice loss directly contributes only very slightly to global sea level rise. If the melt water from ice floating in the sea were exactly the same as sea water then, according to Archimedes' principle, no rise would occur. However, melted sea ice contains less dissolved salt than sea water and is therefore less dense, with a slightly greater volume per unit of mass. If all floating ice shelves and icebergs were to melt, sea level would only rise by about 4 cm (1+1⁄2 in). Human activity affects how much water is stored on land. Dams retain large quantities of water, which is stored on land rather than flowing into the sea, though the total quantity stored varies over time. On the other hand, humans extract water from lakes, wetlands and underground reservoirs for drinking and food production, which often causes subsidence. Furthermore, the hydrological cycle is influenced by climate change and deforestation.
In the 20th century, these processes approximately cancelled out each other's impact on sea level rise, but dam building has slowed down and is expected to stay at low levels for the 21st century.: 1155 Water redistribution caused by irrigation, which moves groundwater into the oceans, was estimated at 2,150 Gt between 1993 and 2010, equivalent to a global sea level rise of 6.24 millimetres (0.246 in), though this could not be directly measured. The net movement of water was also expected to have caused a drift of Earth's rotational pole by 78.48 centimetres (30.90 in), which was confirmed in 2023. Another, less common contributor to sea level rise is land sinkage. Although it does not occur everywhere, it has a large effect on some coasts, such as the United States East Coast. Sea level is measured relative to land, meaning that as land sinks, sea level goes up, thus accelerating sea level rise. Land sinkage has two main causes: the pumping of groundwater and shifting tectonic plates. As people pump water out of the ground, cavities form where the water used to be; these are then compressed, sinking the land. As tectonic plates shift, they slide under one another, causing land to sink, especially on the coasts. In places such as the Gulf Coast of the United States, land is sinking by an inch (2.5 cm) every five to ten years. Impacts Sea-level rise has many impacts. They include higher and more frequent high-tide and storm-surge flooding and increased coastal erosion. Other impacts are inhibition of primary production processes, more extensive coastal inundation, and changes in surface water quality and groundwater. These can lead to a greater loss of property and coastal habitats, loss of life during floods, and loss of cultural resources. There are also impacts on agriculture and aquaculture, and there can be losses of tourism, recreation, and transport-related functions.: 356 Land use changes such as urbanisation or deforestation of low-lying coastal zones exacerbate coastal flooding impacts. Regions already vulnerable to rising sea level also struggle with coastal flooding, which washes away land and alters the landscape. Changes in emissions are likely to have only a small effect on the extent of sea level rise by 2050, so projected sea level rise could put tens of millions of people at risk by then. Scientists estimate that 2050 levels of sea level rise would result in about 150 million people under the water line during high tide, and about 300 million in places flooded every year. This projection is based on the distribution of population in 2010; it does not take into account the effects of population growth and human migration. These figures are 40 million and 50 million more, respectively, than the numbers at risk in 2010. By 2100, there would be another 40 million people under the water line during high tide if sea level rise remains low, or 80 million for a high estimate of median sea level rise. Ice sheet processes under the highest-emission scenario would result in sea level rise of well over one metre (3+1⁄4 ft) by 2100, possibly as much as over two metres (6+1⁄2 ft).: TS-45 This could result in as many as 520 million additional people ending up under the water line during high tide and 640 million in places flooded every year, compared to the 2010 population distribution. Over the longer term, coastal areas are particularly vulnerable to rising sea levels.
They are also vulnerable to changes in the frequency and intensity of storms, increased precipitation, and rising ocean temperatures. Ten percent of the world's population live in coastal areas that are less than 10 metres (33 ft) above sea level, and two thirds of the world's cities with over five million people are located in these low-lying coastal areas. About 600 million people live directly on the coast around the world. Cities such as Miami, Rio de Janeiro, Osaka and Shanghai will be especially vulnerable later in the century under warming of 3 °C (5.4 °F), which is close to the current trajectory. LiDAR-based research established in 2021 that 267 million people worldwide lived on land less than 2 m (6+1⁄2 ft) above sea level. With a 1 m (3+1⁄2 ft) sea level rise and zero population growth, that could increase to 410 million people. Potential disruption of sea trade and migrations could affect people living further inland. United Nations Secretary-General António Guterres warned in 2023 that sea level rise risks causing human migrations on a "biblical scale". Sea level rise will inevitably affect ports, but there is limited research on this; in particular, there is insufficient knowledge about the investments necessary to protect ports currently in use, including protecting current facilities before it becomes more reasonable to build new ports elsewhere. Some coastal regions are rich agricultural lands, whose loss to the sea could cause food shortages. This is a particularly acute issue for river deltas such as the Nile Delta in Egypt and the Red River and Mekong Deltas in Vietnam, on which saltwater intrusion into the soil and irrigation water has a disproportionate effect. In 2025, the World Economic Forum said that rising sea levels caused by climate change were affecting 1 billion people worldwide. As of May 2025, about 230 million people live within 1 metre of the current sea level, and a further 1 billion live between 1 and 10 metres above it, for a total of 1.23 billion people living within 10 metres of sea level. Even just 20 cm of sea level rise by 2050 would lead to global flood damages of at least $1 trillion a year for the world's 136 largest coastal cities, with huge impacts on people's lives and livelihoods, and scientists have warned that sea level rise could trigger catastrophic inland migration. Flooding and soil/water salinization threaten the habitats of coastal plants, birds, and freshwater/estuarine fish when seawater moves inland. When coastal forest areas become inundated with saltwater to the point that no trees can survive, the resulting habitats are called ghost forests. Starting around 2050, some nesting sites in Florida, Cuba, Ecuador and the island of Sint Eustatius for leatherback, loggerhead, hawksbill, green and olive ridley turtles are expected to be flooded, and the proportion will increase over time. In 2016, the Bramble Cay islet in the Great Barrier Reef was inundated, flooding the habitat of a rodent called the Bramble Cay melomys, which was officially declared extinct in 2019. Some ecosystems can move inland with the high-water mark, but natural or artificial barriers prevent many from migrating. This coastal narrowing is sometimes called "coastal squeeze" when it involves human-made barriers, and it could result in the loss of habitats such as mudflats and tidal marshes. Mangrove ecosystems on the mudflats of tropical coasts nurture high biodiversity, but they are particularly vulnerable due to mangrove plants' reliance on breathing roots, or pneumatophores.
These will be submerged if the rate of rise is too rapid for them to migrate upward, resulting in the loss of the ecosystem. Both mangroves and tidal marshes protect against storm surges, waves and tsunamis, so their loss makes the effects of sea level rise worse. Human activities such as dam building may restrict sediment supplies to wetlands, preventing natural adaptation processes; the loss of some tidal marshes is unavoidable as a consequence. Corals are important for bird and fish life. They need to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. The corals have so far been able to keep up their vertical growth with the rising seas, but might not be able to do so in the future. Regional variations When a glacier or ice sheet melts, it loses mass, which reduces its gravitational pull. In some places near current and former glaciers and ice sheets, this has caused water levels to drop. At the same time, water levels will increase more than average further away from the ice sheet. Thus ice loss in Greenland affects regional sea level differently from an equivalent loss in Antarctica. On the other hand, the Atlantic is warming at a faster pace than the Pacific, with consequences for Europe and the U.S. East Coast: the East Coast sea level is rising at 3–4 times the global average. Scientists have linked extreme regional sea level rise on the US Northeast Coast to the downturn of the Atlantic meridional overturning circulation (AMOC). Many ports, urban conglomerations, and agricultural regions stand on river deltas, where land subsidence contributes to much higher relative sea level rise. Unsustainable extraction of groundwater and of oil and gas is one cause; levees and other flood management practices are another, as they prevent the accumulation of sediments that would otherwise compensate for the natural settling of deltaic soils.: 638 : 88 Estimates for total human-caused subsidence in the Rhine-Meuse-Scheldt delta (Netherlands) are 3–4 m (10–13 ft), over 3 m (10 ft) in urban areas of the Mississippi River Delta (New Orleans), and over 9 m (30 ft) in the Sacramento–San Joaquin River Delta.: 81–90 On the other hand, relative sea level around Hudson Bay in Canada and the northern Baltic Sea is falling due to post-glacial isostatic rebound. Adaptation Cutting greenhouse gas emissions can slow and stabilize the rate of sea level rise after 2050, greatly reducing its costs and damages, but it cannot stop sea level rise outright, so climate change adaptation to sea level rise is inevitable.: 3–127 The simplest approach is to stop development in vulnerable areas and ultimately move people and infrastructure away from them. However, such retreat from sea level rise often results in the loss of livelihoods, and the displacement of newly impoverished people could burden the communities that receive them and accelerate social tensions. Some communities are responding to sea-level rise by building protective infrastructure, moving away from the coast, or introducing new policies to support long-term adaptation. At the same time, certain coastal ecosystems such as wetlands can naturally adjust by migrating to higher ground if the surrounding conditions allow. It is possible to avoid, or at least delay, the retreat from sea level rise with enhanced protections, such as dams, levees or improved natural defenses.
Other options include updating building standards to reduce damage from floods, adding storm water valves to address more frequent and severe flooding at high tide, or cultivating crops more tolerant of saltwater in the soil, even at an increased cost. These options divide into hard and soft adaptation. Hard adaptation generally involves large-scale changes to human societies and ecological systems, often including the construction of capital-intensive infrastructure. Soft adaptation involves strengthening natural defenses and local community adaptation, usually with simple, modular and locally owned technology. The two types of adaptation may be complementary or mutually exclusive. Adaptation options often require significant investment, but the costs of doing nothing are far greater. One example involves adaptation against flooding: effective adaptation measures could reduce future annual costs of flooding in 136 of the world's largest coastal cities from $1 trillion by 2050 without adaptation to a little over $60 billion annually, at an adaptation cost of about $50 billion per year. Some experts argue that, in the case of very high sea level rise, retreat from the coast would have a lower impact on the GDP of India and Southeast Asia than attempting to protect every coastline.

To be successful, adaptation must anticipate sea level rise well ahead of time. As of 2023, the global state of adaptation planning is mixed. A survey of 253 planners from 49 countries found that 98% are aware of sea level rise projections, but 26% have not yet formally integrated them into their policy documents. Only around a third of respondents from Asian and South American countries have done so, compared with 50% in Africa, and over 75% in Europe, Australasia and North America. Some 56% of all surveyed planners have plans which account for 2050 and 2100 sea level rise, but 53% use only a single projection rather than a range of two or three projections. Just 14% use four projections, including the one for "extreme" or "high-end" sea level rise. Another study found that over 75% of regional sea level rise assessments from the West and Northeastern United States included at least three estimates, usually RCP2.6, RCP4.5 and RCP8.5, sometimes with extreme scenarios, but 88% of projections from the American South had only a single estimate. Similarly, no assessment from the South went beyond 2100, while 14 assessments from the West went up to 2150 and three from the Northeast went to 2200. 56% of all localities were also found to underestimate the upper end of sea level rise relative to the IPCC Sixth Assessment Report.

By region

In Africa, future population growth amplifies risks from sea level rise. Some 54.2 million people lived in the highly exposed low elevation coastal zones (LECZ) around 2000. This number will effectively double to around 110 million people by 2030, and then reach 185 to 230 million people by 2060. By then, the average regional sea level rise will be around 21 cm, with little difference between climate change scenarios. By 2100, Egypt, Mozambique and Tanzania are likely to have the largest number of people affected by annual flooding among all African countries, and under RCP8.5, 10 important cultural sites would be at risk of flooding and erosion by the end of the century. In the near term, some of the largest displacement is projected to occur in the East Africa region.
At least 750,000 people there are likely to be displaced from the coasts between 2020 and 2050. By 2050, 12 major African cities would collectively sustain cumulative damages of US$65 billion for the "moderate" climate change scenario RCP4.5, and between US$86.5 billion and US$137.5 billion on average; in the worst case, these damages could effectively triple. In all of these estimates, around half of the damages would occur in the Egyptian city of Alexandria, and hundreds of thousands of people in its low-lying areas may already need relocation in the coming decade. Across sub-Saharan Africa as a whole, damage from sea level rise could reach 2–4% of GDP by 2050, although this depends on the extent of future economic growth and climate change adaptation.

Asia has the largest population at risk from sea level rise due to its dense coastal populations. As of 2022, some 63 million people in East and South Asia were already at risk from a 100-year flood, largely because of inadequate coastal protection in many countries. Bangladesh, China, India, Indonesia, Japan, Pakistan, the Philippines, Thailand and Vietnam alone account for 70% of people exposed to sea level rise during the 21st century. Sea level rise in Bangladesh is likely to displace 0.9–2.1 million people by 2050. It may also force the relocation of up to one third of power plants as early as 2030, and many of the remaining plants would have to deal with the increased salinity of their cooling water. Nations with extensive rice production in coastal areas, such as Bangladesh, Vietnam and China, are already seeing adverse impacts from saltwater intrusion. Modelling results predict that Asia will suffer direct economic damages of US$167.6 billion at 0.47 metres of sea level rise, rising to US$272.3 billion at 1.12 metres and US$338.1 billion at 1.75 metres, with additional indirect impacts of US$8.5 billion, US$24 billion and US$15 billion from population displacement at those respective levels. China, India, the Republic of Korea, Japan, Indonesia and Russia would experience the largest economic losses.

Out of the 20 coastal cities expected to see the highest flood losses by 2050, 13 are in Asia. Nine of these are so-called sinking cities, where subsidence (typically caused by unsustainable groundwater extraction in the past) would compound sea level rise: Bangkok, Guangzhou, Ho Chi Minh City, Jakarta, Kolkata, Nagoya, Tianjin, Xiamen and Zhanjiang. By 2050, Guangzhou would see 0.2 metres of sea level rise and estimated annual economic losses of US$254 million – the highest in the world. In Shanghai, coastal inundation currently amounts to about 0.03% of local GDP, but this would increase to 0.8% by 2100 even under the "moderate" RCP4.5 scenario in the absence of adaptation. The city of Jakarta is sinking so much (up to 28 cm (11 in) per year between 1982 and 2010 in some areas) that in 2019 the government committed to relocating the capital of Indonesia to another city.

In Australia, erosion and flooding of Queensland's Sunshine Coast beaches is likely to intensify by 60% by 2030, and without adaptation there would be a substantial impact on tourism. Adaptation costs for sea level rise would be three times higher under the high-emission RCP8.5 scenario than under the low-emission RCP2.6 scenario. Sea level rise of 0.2–0.3 metres is likely by 2050. In these conditions, what is currently a 100-year flood would occur every year in the New Zealand cities of Wellington and Christchurch. With 0.5 m of sea level rise, a current 100-year flood in Australia would occur several times a year.
In New Zealand, this would expose buildings with a collective worth of NZ$12.75 billion to new 100-year floods. A metre or so of sea level rise would threaten assets in New Zealand worth NZ$25.5 billion, with a disproportionate impact on Māori-owned holdings and cultural heritage objects. Australian assets worth A$164–226 billion, including many unsealed roads and railway lines, would also be at risk, amounting to a 111% rise in Australia's inundation costs between 2020 and 2100.

By 2100, coastal flooding and erosion will affect at least 3–4 million people in South America. Many people live in low-lying areas exposed to sea level rise, including 6% of the population of Venezuela, 56% of the population of Guyana and 68% of the population of Suriname. In Guyana, much of the capital Georgetown is already below sea level. In Brazil, the coastal ecoregion of Caatinga is responsible for 99% of the nation's shrimp production; a combination of sea level rise, ocean warming and ocean acidification threatens its unique ecosystem. Extreme wave or wind behavior disrupted the port complex of Santa Catarina 76 times in one six-year period in the 2010s, at a loss of US$25,000–50,000 for each idle day. At the Port of Santos in São Paulo, storm surges were three times more frequent between 2000 and 2016 than between 1928 and 1999.

Many sandy coastlines in Europe are vulnerable to erosion due to sea level rise. In Spain, the Costa del Maresme is likely to retreat by 16 metres by 2050 relative to 2010, and this could amount to 52 metres by 2100 under RCP8.5. Other vulnerable coastlines include the Tyrrhenian Sea coast of Italy's Calabria region, the Barra-Vagueira coast in Portugal and Nørlev Strand in Denmark. In France, it was estimated that 8,000–10,000 people would be forced to migrate away from the coasts by 2080. The Italian city of Venice is located on islands; it is highly vulnerable to flooding and has already spent $6 billion on a barrier system. A quarter of the German state of Schleswig-Holstein, inhabited by over 350,000 people, is at low elevation and has been vulnerable to flooding since preindustrial times. Many levees already exist, and because of its complex geography, the authorities chose a flexible mix of hard and soft measures to cope with sea level rise of over 1 metre per century. In the United Kingdom, sea level at the end of the century would increase by 53 to 115 centimetres at the mouth of the River Thames and by 30 to 90 centimetres in the Firth of Forth at Edinburgh. The UK has divided its coast into 22 areas, each covered by a Shoreline Management Plan, which are sub-divided into 2000 management units working across three periods of 0–20, 20–50 and 50–100 years.

The Netherlands is a country that sits partially below sea level and is subsiding; it has responded by extending its Delta Works program. Drafted in 2008, the Delta Commission report said that the country must plan for a rise in the North Sea of up to 1.3 m (4 ft 3 in) by 2100 and for a 2–4 m (7–13 ft) rise by 2200. It advised annual spending of between €1.0 and €1.5 billion to support measures such as broadening coastal dunes and strengthening sea and river dikes. Worst-case evacuation plans were also drawn up.

As of 2017, around 95 million Americans lived on the coast; the figures for Canada and Mexico were 6.5 million and 19 million. Increased chronic nuisance flooding and king tide flooding is already a problem in the highly vulnerable state of Florida, and the US East Coast is also vulnerable.
On average, the number of days with tidal flooding in the US doubled over the years 2000–2020, reaching 3–7 days per year. In some areas the increase was much stronger: a quadrupling in the Southeast Atlantic and an elevenfold increase in the Western Gulf. By 2030 the average is expected to be 7–15 days, reaching 25–75 days by 2050. U.S. coastal cities have responded with beach nourishment, or beach replenishment (adding mined sand to a beach), in addition to other adaptation measures such as zoning, restrictions on state funding, and building code standards. Along an estimated 15% of the US coastline, the majority of local groundwater levels are already below sea level, placing those groundwater reservoirs at risk of seawater intrusion, which would render fresh water unusable once the seawater concentration exceeds 2–3%.

Damage is also widespread in Canada, affecting major cities such as Halifax and more remote locations such as Lennox Island, whose Mi'kmaq community is already considering relocation due to widespread coastal erosion. In Mexico, damage from sea level rise to tourism hotspots such as Cancun, Isla Mujeres, Playa del Carmen, Puerto Morelos and Cozumel could amount to US$1.4–2.3 billion. The increase in storm surges due to sea level rise is also a problem: due to this effect, Hurricane Sandy caused an additional US$8 billion in damage, impacted 36,000 more houses and affected 71,000 more people. In the future, the northern Gulf of Mexico, Atlantic Canada and the Pacific coast of Mexico would experience the greatest sea level rise.

By 2030, flooding along the US Gulf Coast could cause economic losses of up to US$176 billion; using nature-based solutions such as wetland restoration and oyster reef restoration could avoid around US$50 billion of this. By 2050, coastal flooding in the US is likely to rise tenfold to four "moderate" flooding events per year, even without accounting for storms or heavy rainfall. In New York City, what is currently considered a 100-year flood would occur once in 19–68 years by 2050 and once in 4–60 years by 2080. By 2050, 20 million people in the greater New York City area would be at risk, because 40% of existing water treatment facilities would be compromised and 60% of power plants would need relocation. By 2100, sea level rise of 0.9 m (3 ft) and 1.8 m (6 ft) would threaten 4.2 and 13.1 million people in the US, respectively. In California alone, 2 m (6½ ft) of sea level rise could affect 600,000 people and threaten over US$150 billion in property with inundation, potentially representing over 6% of the state's GDP. In North Carolina, a metre of sea level rise would inundate 42% of the Albemarle-Pamlico Peninsula, costing up to US$14 billion. In nine southeastern US states, the same level of sea level rise would claim up to 13,000 historical and archaeological sites, including over 1,000 sites eligible for inclusion in the National Register of Historic Places.

Small island states are nations with populations on atolls and other low islands. Atolls on average reach 0.9–1.8 m (3–6 ft) above sea level, making them the places most vulnerable to the coastal erosion, flooding and salt intrusion into soils and freshwater caused by sea level rise. Sea level rise may make an island uninhabitable before it is completely flooded. Already, children in small island states encounter hampered access to food and water, and suffer an increased rate of mental and social disorders due to these stresses.
At current rates, sea level rise would be high enough to make the Maldives uninhabitable by 2100. Five of the Solomon Islands have already disappeared due to the combined effects of sea level rise and stronger trade winds pushing water into the Western Pacific. Adaptation to sea level rise is costly for small island nations, as a large portion of their inhabitants live in areas at risk. Nations such as the Maldives, Kiribati and Tuvalu already have to consider controlled international migration of their populations in response to rising seas; the alternative of uncontrolled migration threatens to worsen the humanitarian crisis of climate refugees. In 2014, Kiribati purchased 20 square kilometres of land (about 2.5% of Kiribati's current area) on the Fijian island of Vanua Levu to relocate its populace once its own islands are lost to the sea.

Fiji also suffers from sea level rise, though it is in a comparatively safer position. Its residents continue to rely on local adaptation measures, such as increasing sediment supply to combat erosion and moving further inland, instead of relocating entirely. Fiji has also issued a green bond of $50 million to invest in green initiatives and fund adaptation efforts, and it is restoring coral reefs and mangrove swamps to protect against flooding and erosion, which it sees as a more cost-efficient alternative to building sea walls. The nations of Palau and Tonga are taking similar steps. Even when an island is not threatened with complete disappearance from flooding, tourism and local economies may end up devastated. For instance, sea level rise of 1.0 m (3 ft 3 in) would cause partial or complete inundation of 29% of coastal resorts in the Caribbean, and a further 49–60% of coastal resorts would be at risk from resulting coastal erosion.
========================================
[SOURCE: https://en.wikipedia.org/wiki/White_dwarf_stars] | [TOKENS: 12284]
White dwarf

A white dwarf is a stellar core remnant composed mostly of electron-degenerate matter. These are very dense objects: a white dwarf packs a mass comparable to the Sun's into a volume the size of the Earth. A white dwarf radiates light from residual heat, not from nuclear fusion. The nearest known white dwarf is Sirius B, at 8.6 light years, the smaller component of the Sirius binary star. There are currently thought to be eight white dwarfs among the one hundred star systems nearest the Sun. The unusual faintness of white dwarfs was first recognized in 1910.: 1 The name white dwarf was coined by Willem Jacob Luyten in 1922. White dwarfs are thought to be the final evolutionary state of main-sequence stars whose mass is not high enough to become a neutron star or black hole. These stars, with masses from about 0.07 to 10 M☉, include over 97% of the stars in the Milky Way.: §1 The composition of the white dwarf produced will depend on the initial mass of the star. Galactic models suggest the Milky Way galaxy currently contains about ten billion white dwarfs. Once formed, the material in a white dwarf no longer undergoes fusion reactions and thus lacks a heat source to support it against gravitational collapse. Instead, it is supported only by electron degeneracy pressure, causing it to be extremely dense. The physics of degeneracy yields a maximum mass for a non-rotating white dwarf, the Chandrasekhar limit (approximately 1.44 times the mass of the Sun), beyond which electron degeneracy pressure cannot support it. A carbon–oxygen white dwarf that approaches this limit, typically by mass transfer from a companion star, may explode as a Type Ia supernova via a process known as carbon detonation; SN 1006 is a likely example. A white dwarf, very hot when it forms, gradually cools as it radiates its energy. This radiation, which initially has a high color temperature, lessens and reddens over time. Eventually, a white dwarf will cool enough that its material begins to crystallize, and it will become a cold black dwarf. The oldest known white dwarfs still radiate at temperatures of a few thousand kelvins, which establishes an observational limit on the maximum possible age of the universe.

History

The first white dwarf discovered was in the triple star system of 40 Eridani, which contains the relatively bright main sequence star 40 Eridani A, orbited at a distance by the closer binary system of the white dwarf 40 Eridani B and the main sequence red dwarf 40 Eridani C. The pair 40 Eridani B/C was discovered by William Herschel on 31 January 1783. In 1910, Henry Norris Russell, Edward Charles Pickering and Williamina Fleming discovered that, despite being a dim star, 40 Eridani B was of spectral type A, or white. This star was far out of the normal pattern when Russell first created his now famous luminosity (brightness) versus spectral type (color) diagram. Publicly, Russell suggested that the spectral type may not be correct.: 114 However, the spectral type of 40 Eridani B was independently confirmed in 1914 by Walter Adams. In 1939, Russell looked back on the discovery and noted that Pickering had suggested that such exceptions lead to breakthroughs, and in this case it led to the discovery of white dwarfs.: 1 The white dwarf companion of Sirius, Sirius B, was next to be discovered. During the nineteenth century, positional measurements of some stars became precise enough to measure small changes in their location.
Friedrich Bessel used position measurements to determine that the stars Sirius (α Canis Majoris) and Procyon (α Canis Minoris) were changing their positions periodically. In 1844 he predicted that both stars had unseen companions. Bessel roughly estimated the period of the companion of Sirius to be about half a century; C.A.F. Peters computed an orbit for it in 1851. It was not until 31 January 1862 that Alvan Graham Clark observed a previously unseen star close to Sirius, later identified as the predicted companion. Adams announced in 1915 that he had found the spectrum of Sirius B to be similar to that of Sirius. In 1917, Adriaan van Maanen discovered van Maanen's Star, an isolated white dwarf. These three white dwarfs, the first discovered, are the so-called classical white dwarfs.: 2 In 1922 Willem Luyten published a series of three papers on faint white stars that had high proper motion, meaning they were low-luminosity white stars close to the Earth. In his third paper he used the term dwarf; the term was later popularized by Arthur Eddington as white dwarf.: 118 By 1939, 18 white dwarfs had been discovered.: 3 Luyten and others continued to search for white dwarfs in the 1940s. By 1950, over a hundred were known, and by 1999, over 2000 were known. Since then the Sloan Digital Sky Survey has found over 9000 white dwarfs, mostly new.

White dwarfs were found to be extremely dense soon after their discovery. If a star is in a binary system, as is the case for Sirius B or 40 Eridani B, it is possible to estimate its mass from observations of the binary orbit. This was done for Sirius B by 1910, yielding a mass estimate of 0.94 M☉, which compares well with a more modern estimate of 1.00 M☉. Since hotter bodies radiate more energy than colder ones, a star's surface brightness can be estimated from its effective surface temperature, and a star's spectrum gives its temperature. If the star's distance is known, its absolute luminosity can also be estimated. From the absolute luminosity and the temperature, the star's surface area and its radius can be calculated. Reasoning of this sort led to the realization, puzzling to astronomers at the time, that due to their relatively high temperature and relatively low absolute luminosity, Sirius B and 40 Eridani B must be very dense. When Ernst Öpik estimated the density of visual binary stars in 1916, he found that 40 Eridani B had a density of over 25000 times that of the Sun, which was so high that he called it "impossible". As Eddington put it in 1927: We learn about the stars by receiving and interpreting the messages which their light brings to us. The message of the companion of Sirius when it was decoded ran: "I am composed of material 3000 times denser than anything you have ever come across; a ton of my material would be a little nugget that you could put in a matchbox." What reply can one make to such a message? The reply which most of us made in 1914 was—"Shut up. Don't talk nonsense.": 50 Such densities are possible because white dwarf material is not composed of atoms joined by chemical bonds, but rather consists of a plasma of unbound nuclei and electrons. There is therefore no obstacle to placing nuclei closer together than would normally be allowed by electron orbitals in ordinary matter. Eddington wondered what would happen when this plasma cooled and the energy to keep the atoms ionized was no longer sufficient. This paradox was resolved by R. H. Fowler in 1926 by an application of the newly devised quantum mechanics.
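The radius-and-density reasoning described above is easy to reproduce numerically. The minimal sketch below assumes round input values of the order reported for Sirius B (the exact figures are illustrative rather than taken from this article) and recovers a density near the 10⁶ g/cm³ scale discussed below.

```python
import math

# Stefan-Boltzmann reasoning sketch: given an effective temperature and an
# absolute luminosity, infer the radius, then the mean density.
# Input values are illustrative, roughly of the order reported for Sirius B.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

T = 25_000.0       # effective temperature from the spectrum (assumed)
L = 0.025 * L_SUN  # absolute luminosity from flux and distance (assumed)
M = 1.0 * M_SUN    # mass from the binary orbit (assumed)

# L = 4*pi*R^2*sigma*T^4  =>  R = sqrt(L / (4*pi*sigma*T^4))
R = math.sqrt(L / (4 * math.pi * SIGMA * T**4))
rho = M / (4 / 3 * math.pi * R**3)

print(f"R ~ {R / R_SUN:.4f} R_sun ({R / 1e3:.0f} km)")
print(f"mean density ~ {rho / 1e3:.1e} g/cm^3")  # of order 10^6 g/cm^3
```

With these assumed inputs the radius comes out below one percent of the Sun's, and the mean density lands in the millions of grams per cubic centimetre, the "impossible" figure that so puzzled Öpik and Eddington.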
Fowler's resolution rests on the fact that electrons obey the Pauli exclusion principle: no two electrons can occupy the same state, and they must obey Fermi–Dirac statistics, also introduced in 1926 to determine the statistical distribution of particles that satisfy the Pauli exclusion principle. At zero temperature, therefore, electrons cannot all occupy the lowest-energy, or ground, state; some of them would have to occupy higher-energy states, forming a band of lowest-available energy states, the Fermi sea. This state of the electrons, called degenerate, meant that a white dwarf could cool to zero temperature and still possess high energy. The existence of a limiting mass that no white dwarf can exceed without collapsing to a neutron star is another consequence of being supported by electron degeneracy pressure. Such limiting masses were calculated for cases of an idealized, constant density star in 1929 by Wilhelm Anderson and in 1930 by Edmund C. Stoner. This value was corrected by considering hydrostatic equilibrium for the density profile, and the presently known value of the limit was first published in 1931 by Subrahmanyan Chandrasekhar in his paper "The Maximum Mass of Ideal White Dwarfs". For a non-rotating white dwarf, it is equal to approximately 5.7 M☉/μe², where μe is the average molecular weight per electron of the star.: eqn.(63) As the carbon-12 and oxygen-16 that predominantly compose a carbon–oxygen white dwarf both have atomic numbers equal to half their atomic weight, one should take μe equal to 2 for such a star,[citation needed] leading to the commonly quoted value of 1.4 M☉. (Near the beginning of the 20th century, there was reason to believe that stars were composed chiefly of heavy elements,: 955 so, in his 1931 paper, Chandrasekhar set the average molecular weight per electron, μe, equal to 2.5, giving a limit of 0.91 M☉.) Together with William Alfred Fowler, Chandrasekhar received the Nobel Prize for this and other work in 1983. The limiting mass is now called the Chandrasekhar limit.

Composition and structure

Although white dwarfs are known with estimated masses as low as 0.17 M☉ and as high as 1.33 M☉, the mass distribution is strongly peaked at 0.6 M☉, and the majority lie between 0.5 and 0.7 M☉. The estimated radii of observed white dwarfs are typically 0.8–2% of the radius of the Sun; this is comparable to the Earth's radius, which is approximately 0.9% of the solar radius. A white dwarf, then, packs mass comparable to the Sun's into a volume that is typically one millionth of the Sun's; the average density of matter in a white dwarf must therefore be, very roughly, a million times greater than the average density of the Sun, or approximately 10⁶ g/cm³, or 1 tonne per cubic centimetre. A typical white dwarf has a density of between 10⁴ and 10⁷ g/cm³. White dwarfs are composed of one of the densest forms of matter known, surpassed only by other compact stars such as neutron stars and the hypothetical quark stars. At these densities the nuclei within the star are much closer together than nuclei in ordinary matter. Two nuclei in a white dwarf are separated by about 1.2×10⁻¹² m, substantially less than a Bohr radius, 5.3×10⁻¹¹ m, the rough size of a hydrogen atom. The white dwarf matter is pressure ionized: there is no room for bound electron orbitals.
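Two of the figures quoted in this section can be checked with a few lines of arithmetic: the limiting mass for a given μe, and the mean density of a solar mass packed into an Earth-sized volume. The sketch below assumes only the 5.7 M☉/μe² formula from the text plus standard values for the solar mass and Earth's radius; it is illustrative, not a precise model.

```python
import math

def chandrasekhar_limit(mu_e: float) -> float:
    """Idealized limiting mass in solar masses, M_limit ~ 5.7 / mu_e^2."""
    return 5.7 / mu_e**2

print(chandrasekhar_limit(2.0))  # ~1.43 M_sun, carbon-oxygen composition
print(chandrasekhar_limit(2.5))  # ~0.91 M_sun, Chandrasekhar's 1931 assumption

# Mean density of one solar mass in an Earth-sized volume.
M_SUN = 1.989e33    # g
R_EARTH = 6.371e8   # cm
rho = M_SUN / (4 / 3 * math.pi * R_EARTH**3)
print(f"{rho:.1e} g/cm^3")  # ~1.8e6, consistent with the ~10^6 quoted above
```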
Rather than a hot thermal gas of atoms like the core of the Sun, the heat in the core of a white dwarf is associated with vibrating nuclei, and the gas-like aspect providing pressure is due to the free electrons.: 575 This unusual source of pressure is known as electron degeneracy pressure, a consequence of quantum mechanics. The uncertainty principle requires the product of the position uncertainty, Δx, and the momentum uncertainty, Δpₓ, to be larger than half of the reduced Planck constant: Δx × Δpₓ > ℏ/2. If the position uncertainty is small, the momentum uncertainty must increase. Furthermore, the Pauli principle of quantum mechanics requires every electron to be distinguishable, which in turn means the uncertainty in the electron positions must be less than their average separation. Forcing the electrons closer together (smaller average separation) reduces their position uncertainty, causing an increase in their momentum uncertainty, and a larger momentum uncertainty means higher typical momenta. This increased momentum appears as a pressure that counteracts the gravitational force compressing the star.: 567 The balanced pressure at the center of a white dwarf will be on the order of 1.5 million times the pressure at the center of the Sun, and the central temperature will exceed 10 million kelvins. While these conditions would normally fuse hydrogen, the low luminosity of white dwarf stars implies that the necessary nuclei are not present.: 560 A typical white dwarf star, a CO white dwarf, is 99% carbon and oxygen by mass, with the remainder being a thin layer of He and H. Main sequence stars close to the upper mass limit of 10 M☉ are thought to fuse carbon into neon, forming O-Ne white dwarf stars. Very light stars, below 0.6 M☉, never fuse He into carbon and oxygen, so they form He-core white dwarfs.: 4 It takes more time than the age of the universe for a light star to burn through its fuel, so any observed He-core white dwarfs are expected to result from mass loss to a companion in an interacting binary star system. If a carbon-oxygen white dwarf accreted enough matter to reach the Chandrasekhar limit of about 1.44 solar masses (for a non-rotating star), it would no longer be able to support the bulk of its mass through electron degeneracy pressure and, in the absence of nuclear reactions, would begin to collapse. The current view is that this limit is not normally attained; increasing temperature and density inside the core ignite carbon fusion as the star approaches the limit (to within about 1%) before collapse is initiated. In contrast, for a core primarily composed of oxygen, neon and magnesium, the collapsing white dwarf will typically form a neutron star. In this case, only a fraction of the star's mass will be ejected during the collapse. If a white dwarf star accumulates sufficient material from a stellar companion to raise its core temperature enough to ignite carbon fusion, it will undergo runaway nuclear fusion, completely disrupting it. There are three avenues by which this detonation is theorised to happen: stable accretion of material from a companion, the collision of two white dwarfs, or accretion that causes ignition in a shell that then ignites the core. The dominant mechanism by which Type Ia supernovae are produced remains unclear.
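Returning to the degeneracy pressure discussed above, one can put a rough number on it by evaluating the standard zero-temperature, non-relativistic Fermi gas expression P = (3π²)^(2/3) ℏ² n^(5/3) / (5mₑ) at a representative mean density. The coefficient and constants below are textbook values rather than anything given in this article, so treat this as an order-of-magnitude sketch only.

```python
import math

# Order-of-magnitude electron degeneracy pressure for a cold Fermi gas:
# P = (3*pi^2)^(2/3) / 5 * (hbar^2 / m_e) * n_e^(5/3)
HBAR = 1.0546e-34  # reduced Planck constant, J s
M_E = 9.109e-31    # electron mass, kg
M_U = 1.6605e-27   # atomic mass unit, kg

rho = 1.0e9   # representative mean density, kg/m^3 (= 10^6 g/cm^3)
mu_e = 2.0    # mass per electron in atomic mass units (C/O composition)

n_e = rho / (mu_e * M_U)  # electron number density, m^-3
P = (3 * math.pi**2) ** (2 / 3) / 5 * HBAR**2 / M_E * n_e ** (5 / 3)

print(f"n_e ~ {n_e:.1e} m^-3")
print(f"P   ~ {P:.1e} Pa")  # ~3e21 Pa at the mean density
```

Even at the mean density this is some hundred thousand times the Sun's central pressure; evaluated at the much higher central density of a white dwarf, the same expression rises toward the million-fold comparison quoted above.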
Despite this uncertainty about how they are produced, Type Ia supernovae have very uniform properties and are useful standard candles over intergalactic distances. Some calibrations are required to compensate for the gradual change in properties or different frequencies of abnormal luminosity supernovae at high redshift, and for small variations in brightness identified by light curve shape or spectrum. The relationship between the mass and radius of white dwarfs can be estimated using the nonrelativistic Fermi gas equation of state, which gives: 25 R/R☉ ≈ 0.012 (M/M☉)^(−1/3) (μe/2)^(−5/3), where R is the radius, M is the mass of the white dwarf, and the subscript ☉ indicates a value relative to the Sun. Here μe is, as before, the average molecular weight per electron, which depends on the composition of the star. Numerical treatment of more complete models has been tested against observational data with good agreement. Since this analysis uses the non-relativistic formula p²/2m for the kinetic energy, it is non-relativistic. When the electron velocity in a white dwarf is close to the speed of light, the kinetic energy formula approaches pc, where c is the speed of light, and it can be shown that the Fermi gas model has no stable equilibrium in the ultrarelativistic limit. In particular, this analysis yields the maximum mass of a white dwarf: M_limit ≈ 1.46 (μe/2)^(−2) M☉. The observation of many white dwarf stars implies that either they started with masses similar to the Sun or something dramatic happened to reduce their mass. For a more accurate computation of the mass-radius relationship and limiting mass of a white dwarf, one must compute the equation of state that describes the relationship between density and pressure in the white dwarf material. If the density and pressure are both set equal to functions of the radius from the center of the star, the system of equations consisting of the hydrostatic equation together with the equation of state can then be solved to find the structure of the white dwarf at equilibrium. In the non-relativistic case, the radius is inversely proportional to the cube root of the mass.: eqn.(80) Relativistic corrections will alter the result so that the radius becomes zero at a finite value of the mass. This is the limiting value of the mass, called the Chandrasekhar limit, at which the white dwarf can no longer be supported by electron degeneracy pressure. Such a computation shows how radius varies with mass for non-relativistic and relativistic models of a white dwarf; both treat the white dwarf as a cold Fermi gas in hydrostatic equilibrium, with the average molecular weight per electron, μe, set equal to 2, radius measured in standard solar radii and mass in standard solar masses. These computations all assume that the white dwarf is non-rotating. If the white dwarf is rotating, the equation of hydrostatic equilibrium must be modified to take into account the centrifugal pseudo-force arising from working in a rotating frame. For a uniformly rotating white dwarf, the limiting mass increases only slightly.
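Before returning to rotation, the two fitted relations quoted above can be evaluated directly. The helper below is a minimal sketch assuming μe = 2, as in the text, and reproduces the typical observed radii of 0.8–2% of the Sun's.

```python
def radius_solar(mass_solar: float, mu_e: float = 2.0) -> float:
    """Non-relativistic estimate: R/R_sun ~ 0.012 (M/M_sun)^(-1/3) (mu_e/2)^(-5/3)."""
    return 0.012 * mass_solar ** (-1.0 / 3.0) * (mu_e / 2.0) ** (-5.0 / 3.0)

def limiting_mass_solar(mu_e: float = 2.0) -> float:
    """Ultrarelativistic limit: M_limit ~ 1.46 (mu_e/2)^(-2) solar masses."""
    return 1.46 * (mu_e / 2.0) ** (-2)

for m in (0.3, 0.6, 1.0):
    print(f"M = {m:.1f} M_sun -> R ~ {radius_solar(m):.4f} R_sun")
print(f"M_limit ~ {limiting_mass_solar():.2f} M_sun")
```

Note the inverse relation: adding mass shrinks the star, the opposite of ordinary matter. This is why the radius heads toward zero as the limiting mass is approached.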
If the white dwarf is allowed to rotate nonuniformly, and viscosity is neglected, then, as was pointed out by Fred Hoyle in 1947, there is no limit to the mass for which it is possible for a model white dwarf to be in static equilibrium, although not all of these model stars will be dynamically stable. Rotating white dwarfs and the estimates of their diameter in terms of the angular velocity of rotation have been treated in the rigorous mathematical literature, and the fine structure of the free boundary of white dwarfs has also been analysed with mathematical rigour.

White dwarfs have low luminosity and therefore occupy a strip at the bottom of the Hertzsprung–Russell diagram, a graph of stellar luminosity versus color or temperature. They should not be confused with low-luminosity objects at the low-mass end of the main sequence, such as the hydrogen-fusing red dwarfs, whose cores are supported in part by thermal pressure, or the even lower-temperature brown dwarfs. The visible radiation emitted by white dwarfs varies over a wide color range, from the whitish-blue color of an O-, B- or A-type main sequence star to the yellow-orange of a late K- or early M-type star. White dwarf luminosity varies over 7 orders of magnitude, from over 100 times that of the Sun to under 1/10000 that of the Sun. Assuming the Stefan–Boltzmann law, relating luminosity to the fourth power of the effective surface temperature, white dwarf effective surface temperatures extend from over 150000 K to barely under 4000 K. Hot white dwarfs, with surface temperatures in excess of 30000 K, have been observed to be sources of soft (i.e., lower-energy) X-rays. This enables the composition and structure of their atmospheres to be studied by soft X-ray and extreme ultraviolet observations. White dwarfs also radiate neutrinos through the Urca process. This process has more effect on hotter and younger white dwarfs. Because neutrinos can pass easily through stellar plasma, they can drain energy directly from the dwarf's interior; this mechanism is the dominant contribution to cooling for approximately the first 20 million years of a white dwarf's existence.

The degenerate matter that makes up the bulk of a white dwarf has a very low opacity: any absorption of a photon requires that an electron transition to a higher empty state, which may not be possible if the energy of the photon does not match any of the quantum states available to that electron. As a result, radiative heat transfer within a white dwarf is low, while its thermal conductivity is high. Consequently, the interior of the white dwarf maintains an almost uniform temperature as it cools down, starting at approximately 10⁸ K shortly after the formation of the white dwarf and reaching less than 10⁶ K for the coolest known white dwarfs. An outer shell of non-degenerate matter sits on top of the degenerate core. The outermost layers, which are cooler than the interior, radiate roughly as a black body. A white dwarf remains visible for a long time, as its tenuous outer atmosphere slowly radiates the thermal content of the degenerate interior. As was explained by Leon Mestel in 1952, unless the white dwarf accretes matter from a companion star or other source, its radiation comes from its stored heat, which is not replenished.: §2.1 White dwarfs have an extremely small surface area to radiate this heat from, so they cool gradually, remaining hot for a long time.
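The wide luminosity range quoted above follows directly from the Stefan–Boltzmann law once a radius is fixed. The sketch below assumes an illustrative radius of 0.012 R☉ and a likewise illustrative temperature grid.

```python
import math

# L = 4*pi*R^2*sigma*T^4 for a fixed white dwarf radius (assumed 0.012 R_sun).
SIGMA = 5.670e-8  # W m^-2 K^-4
R_SUN = 6.957e8   # m
L_SUN = 3.828e26  # W

R = 0.012 * R_SUN
for T in (150_000, 30_000, 10_000, 4_000):
    L = 4 * math.pi * R**2 * SIGMA * T**4
    print(f"T = {T:>7,} K -> L ~ {L / L_SUN:.1e} L_sun")
# Spans from tens of L_sun down to ~3e-5 L_sun at this one radius, roughly
# matching the wide range quoted above (the true spread is wider still,
# since the radius also varies from star to star).
```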
As a white dwarf cools, its surface temperature decreases, the radiation that it emits reddens, and its luminosity decreases. Since the white dwarf has no energy sink other than radiation, and its luminosity falls as its temperature falls, it follows that its cooling slows with time. The rate of cooling has been estimated for a carbon white dwarf of 0.59 M☉ with a hydrogen atmosphere. After initially taking approximately 1.5 billion years to cool to a surface temperature of 7140 K, cooling approximately 500 more kelvins to 6590 K takes around 0.3 billion years, but the next two steps of around 500 kelvins (to 6030 K and 5550 K) take first 0.4 and then 1.1 billion years.: Table 2 Most observed white dwarfs have relatively high surface temperatures, between 8000 K and 40000 K. However, this is a selection effect: hotter, more luminous white dwarfs are easier to observe. A white dwarf spends more of its lifetime at cooler temperatures, so more cool white dwarfs than hot white dwarfs exist. This trend stops at extremely cool white dwarfs; few white dwarfs are observed with surface temperatures below 4000 K, and one of the coolest so far observed, WD J2147–4035, has a surface temperature of approximately 3050 K. The reason for this is that the Universe's age is finite; there has not been enough time for white dwarfs to cool below this temperature. The white dwarf luminosity function can therefore be used to find the time when stars started to form in a region; an estimate for the age of our galactic disk found in this way is 8 billion years. A white dwarf will eventually, in many trillions of years, cool and become a non-radiating black dwarf in approximate thermal equilibrium with its surroundings and with the cosmic background radiation. No black dwarfs are thought to exist yet. At very low temperatures (<4000 K), white dwarfs with hydrogen in their atmosphere will be affected by collision-induced absorption (CIA) of hydrogen molecules colliding with helium atoms. This affects the optical red and infrared brightness of white dwarfs with a hydrogen or mixed hydrogen-helium atmosphere, making old white dwarfs with this kind of atmosphere bluer than the main cooling sequence. White dwarfs with hydrogen-poor atmospheres, such as WD J2147–4035, are less affected by CIA and therefore have a yellow to orange color.

White dwarf core material is a completely ionized plasma, a mixture of nuclei and electrons, that is initially in a fluid state. It was theoretically predicted in the 1960s that at a late stage of cooling, it should crystallize into a solid state, starting at its center. The crystal structure is thought to be a body-centered cubic lattice. In 1995 it was suggested that asteroseismological observations of pulsating white dwarfs yielded a potential test of the crystallization theory, and in 2004, observations were made that suggested approximately 90% of the mass of BPM 37093 had crystallized. Other work gives a crystallized mass fraction of between 32% and 82%. As a white dwarf core undergoes crystallization into a solid phase, latent heat is released, which provides a source of thermal energy that delays its cooling. Another possible mechanism that was suggested to explain this cooling anomaly in some types of white dwarfs is a solid–liquid distillation process: the crystals formed in the core are buoyant and float up, thereby displacing heavier liquid downward, thus causing a net release of gravitational energy. Chemical fractionation between the ionic species in the plasma mixture can release a similar or even greater amount of energy.
This energy release was first confirmed in 2019 after the identification of a pile-up in the cooling sequence of more than 15000 white dwarfs observed with the Gaia satellite. Low-mass helium white dwarfs (mass < 0.20 M☉), often referred to as extremely low-mass white dwarfs (ELM WDs), are formed in binary systems. As a result of their hydrogen-rich envelopes, residual hydrogen burning via the CNO cycle may keep these white dwarfs hot for hundreds of millions of years. In addition, they remain in a bloated proto-white dwarf stage for up to 2 Gyr before they reach the cooling track.

Although most white dwarfs are thought to be composed of carbon and oxygen, spectroscopy typically shows that their emitted light comes from an atmosphere that is observed to be either hydrogen- or helium-dominated. The dominant element is usually at least 1000 times more abundant than all other elements. As explained by Schatzman in the 1940s, the high surface gravity is thought to cause this purity by gravitationally separating the atmosphere, so that heavy elements lie below and lighter ones above.: §§5–6 This atmosphere, the only part of the white dwarf visible to us, is thought to be the top of an envelope that is a residue of the star's envelope in the AGB phase and may also contain material accreted from the interstellar medium. The envelope is believed to consist of a helium-rich layer with mass no more than 1/100 of the star's total mass, which, if the atmosphere is hydrogen-dominated, is overlain by a hydrogen-rich layer with mass approximately 1/10000 of the star's total mass.: §§4–5 Although thin, these outer layers determine the thermal evolution of the white dwarf. The degenerate electrons in the bulk of a white dwarf conduct heat well. Most of a white dwarf's mass is therefore at almost the same temperature (isothermal), and it is also hot: a white dwarf with a surface temperature between 8000 K and 16000 K will have a core temperature between approximately 5 million and 20 million K. The white dwarf is kept from cooling very quickly only by its outer layers' opacity to radiation.

The first attempt to classify white dwarf spectra appears to have been by G.P. Kuiper in 1941, and various classification schemes have been proposed and used since then. The system currently in use was introduced by Edward M. Sion, Jesse L. Greenstein and their coauthors in 1983 and has been subsequently revised several times. It classifies a spectrum by a symbol that consists of an initial D, a letter describing the primary feature of the spectrum followed by an optional sequence of letters describing secondary features of the spectrum (as shown in the adjacent table), and a temperature index number, computed by dividing 50400 K by the effective temperature. For example, a white dwarf with only He I lines in its spectrum and an effective temperature of 15000 K could be given the classification of "DB3", or, if warranted by the precision of the temperature measurement, "DB3.5". Likewise, a white dwarf with a polarized magnetic field, an effective temperature of 17000 K, and a spectrum dominated by He I lines that also had hydrogen features could be given the classification of DBAP3. The symbols "?" and ":" may also be used if the correct classification is uncertain. White dwarfs whose primary spectral classification is DA have hydrogen-dominated atmospheres. They make up the majority, approximately 80%, of all observed white dwarfs. The next most numerous class is the DBs, at approximately 16%.
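The temperature index in this scheme is simple enough to compute. The function below is an illustrative sketch (its name and the half-step rounding rule are assumptions, not part of the published scheme), reproducing the DB3 / DB3.5 example above.

```python
def temperature_index(t_eff: float, half_steps: bool = False) -> float:
    """Spectral temperature index: 50400 K divided by the effective temperature,
    rounded to the nearest integer, or half-integer if precision warrants it."""
    step = 0.5 if half_steps else 1.0
    return round(50_400 / t_eff / step) * step

print(temperature_index(15_000))                   # 3   -> "DB3"
print(temperature_index(15_000, half_steps=True))  # 3.5 -> "DB3.5"
print(temperature_index(17_000))                   # 3   -> e.g. "DBAP3"
```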
Hot DQ white dwarfs, above 15000 K (roughly 0.1% of the total), have carbon-dominated atmospheres. Those classified as DB, DC, DO, DZ, and cool DQ have helium-dominated atmospheres. Assuming that carbon and metals are not present, which spectral classification is seen depends on the effective temperature. Between approximately 100000 K and 45000 K, the spectrum will be classified DO, dominated by singly ionized helium. From 30000 K to 12000 K, the spectrum will be DB, showing neutral helium lines, and below about 12000 K, the spectrum will be featureless and classified DC.: §2.4 Molecular hydrogen (H2) has been detected in spectra of the atmospheres of some white dwarfs. While theoretical work suggests that some types of white dwarfs may have stellar coronae, searches at X-ray and radio wavelengths, where coronae are most easily detected, have been unsuccessful. A few white dwarfs have been observed to have inhomogeneous atmospheres, with one side dominated by hydrogen and the other side dominated by helium.

Around 25–33% of white dwarfs have metal lines in their spectra, which is notable because any heavy elements in a white dwarf should sink into the star's interior in just a small fraction of the star's lifetime. The prevailing explanation for metal-rich white dwarfs is that they have recently accreted rocky planetesimals. The bulk composition of the accreted object can be measured from the strengths of the metal lines. For example, a 2015 study of the white dwarf Ton 345 concluded that its metal abundances were consistent with those of a differentiated, rocky planet whose mantle had been eroded by the host star's wind during its asymptotic giant branch phase.

Magnetic fields in white dwarfs with a strength at the surface of c. 1 million gauss (100 teslas) were predicted by P. M. S. Blackett in 1947 as a consequence of a physical law he had proposed, which stated that an uncharged, rotating body should generate a magnetic field proportional to its angular momentum. This putative law, sometimes called the Blackett effect, was never generally accepted, and by the 1950s even Blackett felt it had been refuted.: 39–43 In the 1960s, it was proposed that white dwarfs might have magnetic fields due to conservation of the total surface magnetic flux that existed in the progenitor star phase. A surface magnetic field of c. 100 gauss (0.01 T) in the progenitor star would thus become a surface magnetic field of c. 100 × 100² = 1 million gauss (100 T) once the star's radius had shrunk by a factor of 100.: §8 : 484 The first magnetic white dwarf to be discovered was GJ 742 (also known as GRW +70 8247), which was identified by James Kemp, John Swedlund, John Landstreet and Roger Angel in 1970 to host a magnetic field through its emission of circularly polarized light. It is thought to have a surface field of approximately 300 million gauss (30 kT).: §8 Since 1970, magnetic fields have been discovered in well over 200 white dwarfs, ranging from 2×10³ to 10⁹ gauss (0.2 T to 100 kT). Many of the presently known magnetic white dwarfs are identified by low-resolution spectroscopy, which is able to reveal the presence of a magnetic field of 1 megagauss or more; thus the basic identification process also sometimes results in the discovery of magnetic fields. White dwarf magnetic fields may also be measured without spectral lines, using the techniques of broadband circular polarimetry, or possibly through measurement of their frequencies of radio emission via the electron cyclotron maser.
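The flux-conservation argument above amounts to a single scaling law: if the surface flux B·R² is conserved, the field grows as the square of the contraction factor. A minimal sketch, with the function name assumed for illustration:

```python
def compressed_field(b_progenitor_gauss: float, shrink_factor: float) -> float:
    """Surface field after contraction, assuming B * R^2 is conserved:
    B_new = B_old * (R_old / R_new)^2."""
    return b_progenitor_gauss * shrink_factor**2

# The example from the text: a ~100 G progenitor field and a radius
# shrinking by a factor of 100 give a ~10^6 G (100 T) white dwarf field.
print(f"{compressed_field(100.0, 100.0):.0e} G")
```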
It has been estimated that at least 10% of white dwarfs have fields in excess of 1 million gauss (100 T). The magnetic fields in a white dwarf may allow for the existence of a new type of chemical bond, perpendicular paramagnetic bonding, in addition to ionic and covalent bonds, though detecting molecules bonded in this way is expected to be difficult. The highly magnetized white dwarf in the binary system AR Scorpii was identified in 2016 as the first pulsar in which the compact object is a white dwarf instead of a neutron star. A second white dwarf pulsar was discovered in 2023.

Variability

Early calculations suggested that there might be white dwarfs whose luminosity varied with a period of around 10 seconds, but searches in the 1960s failed to observe this.: §7.1.1 The first variable white dwarf found was HL Tau 76; in 1965 and 1966, it was observed to vary with a period of approximately 12.5 minutes. The reason for this period being longer than predicted is that the variability of HL Tau 76, like that of the other pulsating variable white dwarfs known, arises from non-radial gravity wave pulsations.: §7 Known types of pulsating white dwarf include the DAV, or ZZ Ceti, stars, including HL Tau 76, with hydrogen-dominated atmospheres and the spectral type DA;: 891, 895 DBV, or V777 Her, stars, with helium-dominated atmospheres and the spectral type DB;: 3525 and GW Vir stars, sometimes subdivided into DOV and PNNV stars, with atmospheres dominated by helium, carbon, and oxygen. GW Vir stars are not, strictly speaking, white dwarfs, but are stars that occupy a position on the Hertzsprung–Russell diagram between the asymptotic giant branch and the white dwarf region; they may be called pre-white dwarfs. These variables all exhibit small (1%–30%) variations in light output, arising from a superposition of vibrational modes with periods of hundreds to thousands of seconds. Observation of these variations gives asteroseismological evidence about the interiors of white dwarfs.

Formation

After the hydrogen-fusing period of a main-sequence star of low or intermediate mass ends, such a star will expand to a red giant and fuse helium to carbon and oxygen in its core by the triple-alpha process. If a red giant has insufficient mass to generate the core temperatures required to fuse carbon (around 10⁹ K), an inert mass of carbon and oxygen will build up at its center. After such a star sheds its outer layers and forms a planetary nebula, it will leave behind a core, which is the remnant white dwarf. White dwarf stars with less than 25% of a solar mass are all found in binary star systems, and the proportion of white dwarfs in binary systems declines to 34% for masses in the range 0.5–0.75 M☉. These low-mass white dwarfs all have helium cores. It is thought that, over a lifespan that considerably exceeds the age of the universe (c. 13.8 billion years), such a star will eventually burn all its hydrogen, for a while becoming a blue dwarf, and end its evolution as a helium white dwarf composed chiefly of helium-4 nuclei. Due to the very long time this process takes, it is not thought to be the origin of the observed helium white dwarfs. Rather, they are believed to be mostly the product of mass loss in binary systems. Proposals to explain those helium white dwarfs that are not part of binary systems include mass loss due to a large exoplanetary companion, stars being stripped of material by companions exploding as supernovae, and various types of stellar mergers.
If the mass of a main-sequence star is between 0.5 and 8 M☉, its core will become sufficiently hot to fuse helium into carbon and oxygen via the triple-alpha process, but it will never become sufficiently hot to fuse neon. Near the end of the period in which it undergoes fusion reactions, such a star will have a carbon–oxygen or oxygen-neon core that does not undergo fusion reactions, surrounded by an inner helium-burning shell and an outer hydrogen-burning shell. On the Hertzsprung–Russell diagram, it will be found on the asymptotic giant branch. It will then expel most of its outer material to form a planetary nebula, leaving only the carbon–oxygen core. White dwarfs with a mass greater than 1.05 M☉ are termed ultramassive white dwarfs. When formed in single-star systems, these are expected to have an oxygen-neon core. However, a significant fraction (~20%) of ultramassive white dwarfs are formed through white dwarf mergers, in which case the result is a carbon-oxygen ultramassive white dwarf. If a star is massive enough, its core will eventually become sufficiently hot to fuse carbon to neon, and then to fuse neon to iron. Such a star will not become a white dwarf, because the mass of its central, non-fusing core, initially supported by electron degeneracy pressure, will eventually exceed the largest possible mass supportable by degeneracy pressure. At this point the core of the star will collapse and it will explode in a core-collapse supernova that will leave behind a remnant neutron star, black hole, or possibly a more exotic form of compact star. Some main-sequence stars, of perhaps 8 to 10 M☉, although sufficiently massive to fuse carbon to neon and magnesium, may be insufficiently massive to fuse neon. Such a star may leave a remnant white dwarf composed chiefly of oxygen, neon, and magnesium, provided that its core does not collapse, and provided that fusion does not proceed so violently as to blow apart the star in a supernova. Although a few white dwarfs have been identified that may be of this type, most evidence for the existence of such comes from the novae called ONeMg or neon novae. The spectra of these novae exhibit abundances of neon, magnesium, and other intermediate-mass elements that appear to be only explicable by the accretion of material onto an oxygen–neon–magnesium white dwarf. Type Iax supernovae, which involve helium accretion by a white dwarf, have been proposed as a channel for transformation of this type of stellar remnant. In this scenario, the carbon detonation produced in a Type Ia supernova is too weak to destroy the white dwarf, expelling just a small part of its mass as ejecta, but produces an asymmetric explosion that kicks the star, often known as a zombie star, to the high speeds of a hypervelocity star. The matter processed in the failed detonation is re-accreted by the white dwarf, with the heaviest elements such as iron falling to its core, where they accumulate. These iron-core white dwarfs would be smaller than the carbon–oxygen kind of similar mass and would cool and crystallize faster than those.

Fate

Once formed, a white dwarf is stable and will usually continue to cool almost indefinitely, eventually to become a black dwarf. Assuming that the universe continues to expand, it is thought that in 10¹⁹ to 10²⁰ years, the galaxies will evaporate as their stars escape into intergalactic space.: §IIIA White dwarfs should generally survive galactic dispersion, although an occasional collision between white dwarfs may produce a new fusing star (e.g.
an extreme helium star) or a super-Chandrasekhar mass white dwarf that will explode in a Type Ia supernova.: §§IIIC, IV The lifetime of a white dwarf is thought to be on the order of the hypothetical lifetime of the proton, known to be at least 10³⁴–10³⁵ years. Some grand unified theories predict a proton lifetime between 10³⁰ and 10³⁶ years. If these theories are not valid, the proton might still decay by complicated nuclear reactions or through quantum gravitational processes involving virtual black holes; in these cases, the lifetime is estimated to be no more than 10²⁰⁰ years. If protons do decay, the mass of a white dwarf will decrease very slowly with time as its nuclei decay, until it loses enough mass to become a non-degenerate lump of matter, and finally disappears completely.: §IV A white dwarf can also be cannibalized or evaporated by a companion star, causing the white dwarf to lose so much mass that it becomes a planetary mass object. The resultant object, orbiting the former companion, now host star, could be a helium planet or diamond planet.

Debris disks and planets

A white dwarf's stellar and planetary system is inherited from its progenitor star and may interact with the white dwarf in various ways. There are several indications that a white dwarf has a remnant planetary system. The most common observable evidence of a remnant planetary system is pollution of the spectrum of a white dwarf with metal absorption lines. 27–50% of white dwarfs show a spectrum polluted with metals, but these heavy elements settle out in the atmosphere of white dwarfs colder than 20000 K. The most widely accepted hypothesis is that this pollution comes from tidally disrupted rocky bodies. The first observation of a metal-polluted white dwarf was by van Maanen in 1917 at the Mount Wilson Observatory; it is now recognized as the first evidence of exoplanets in astronomy. The white dwarf van Maanen 2 shows iron, calcium and magnesium in its atmosphere, but van Maanen misclassified it as the faintest F-type star based on the calcium H- and K-lines. The nitrogen in white dwarfs is thought to come from nitrogen-ice of extrasolar Kuiper Belt objects, the lithium is thought to come from accreted crust material, and the beryllium is thought to come from exomoons. A less common observable sign is an infrared excess due to a flat and optically thick debris disk, which is found in around 1%–4% of white dwarfs. The first white dwarf with an infrared excess was discovered by Zuckerman and Becklin in 1987 in the near-infrared around Giclas 29-38 and later confirmed as a debris disk. White dwarfs hotter than 27000 K sublimate all the dust formed by tidally disrupting a rocky body, preventing the formation of a debris disk. In colder white dwarfs, a rocky body might be tidally disrupted near the Roche radius and forced into a circular orbit by the Poynting–Robertson drag, which is stronger for less massive white dwarfs. The Poynting–Robertson drag will also cause the dust to orbit closer and closer to the white dwarf, until it eventually sublimates and the disk disappears. A debris disk will have a lifetime of around a few million years for white dwarfs hotter than 10000 K. Colder white dwarfs can have disk lifetimes of a few tens of millions of years, which is enough time to tidally disrupt a second rocky body and form a second disk around a white dwarf, such as the two rings around LSPM J0207+3331. The least common observable evidence of planetary systems is the detection of major or minor planets.
Only a handful of giant planets and a handful of minor planets are known around white dwarfs. Some estimates based on the metal content of white dwarf atmospheres suggest that at least 15% of them may be orbited by planets or asteroids, or at least by their debris. Another suggested idea is that white dwarfs could be orbited by the stripped cores of rocky planets that survived the red giant phase of their star but lost their outer layers; since such planetary remnants would likely be made of metals, one could attempt to detect them by looking for the signatures of their interaction with the white dwarf's magnetic field. Other suggested mechanisms for how white dwarfs are polluted with dust involve the scattering of asteroids by planets, or planet–planet scattering. Liberation of exomoons from their host planet could also pollute white dwarfs with dust: the liberation could cause asteroids to be scattered towards the white dwarf, or the exomoon itself could be scattered into the Roche radius of the white dwarf. The mechanism behind the pollution of white dwarfs in binaries has also been explored, as these systems are more likely to lack a major planet, but this idea cannot explain the presence of dust around single white dwarfs. While old white dwarfs show evidence of dust accretion, white dwarfs older than ~1 billion years, or cooler than about 7,000 K, with a dusty infrared excess were not detected until the discovery of LSPM J0207+3331 in 2018, which has a cooling age of ~3 billion years. The white dwarf shows two dusty components that are explained by two rings with different temperatures. Another possible way to detect planetary systems around white dwarfs is through their radio emissions. In 2004 and 2005, A. J. Willes and K. Wu hypothesized that when an exoplanet travels through the magnetosphere of a white dwarf, it may generate auroral radio emissions from the magnetic poles of the white dwarf, similar to how Io stimulates radio emissions from Jupiter; a search for such radio emission from nine white dwarfs with the Arecibo radio telescope has so far found none. Infrared spectroscopic observations made by NASA's Spitzer Space Telescope of WD 2226-210, the central star of the Helix Nebula, suggest the presence of a dust cloud, which may be caused by cometary collisions. It is possible that infalling material from this cloud causes X-ray emission from the central star. Similarly, observations made in 2004 indicated the presence of a dust cloud around the young white dwarf G29-38 (estimated to have formed from its AGB progenitor about 500 million years ago), which may have been created by tidal disruption of a comet passing close to the white dwarf. The metal-rich white dwarf WD 1145+017 is the first white dwarf observed with a disintegrating minor planet that transits the star. The disintegration of the planetesimal generates a debris cloud that passes in front of the star every 4.5 hours, causing a 5-minute-long fade in the star's optical brightness. The depth of the transit is highly variable. The giant planet WD J0914+1914b is the first gas planet known to orbit a white dwarf; it is being evaporated by the strong ultraviolet radiation of the hot white dwarf, and part of the evaporated material is accreted into a gaseous disk around the star. The weak hydrogen line, as well as other lines in the spectrum of the white dwarf, revealed the presence of the giant planet. The white dwarf WD 0145+234 shows brightening in the mid-infrared, seen in NEOWISE data.
The brightening, not seen before 2018, may be due to the tidal disruption of an exoasteroid, the first time such an event has been observed. WD 1856+534 hosts the first transiting major planet observed orbiting a white dwarf, and it remains the only such example as of 2023. MOA-2010-BLG-477L, a white dwarf discovered through a microlensing event, is also known to have a giant planet. GD 140 and LAWD 37 are suspected to have giant exoplanets due to anomalies in their Hipparcos–Gaia proper motions. For GD 140, the suspected planet is several times more massive than Jupiter; for LAWD 37, less massive than Jupiter. Additionally, WD 0141-675 was suspected to have a super-Jupiter with an orbital period of 33.65 days based on Gaia astrometry. This was remarkable because WD 0141-675 is polluted with metals, and metal-polluted white dwarfs have long been suspected to host giant planets that disturb the orbits of minor planets, causing the pollution. Both GD 140 and WD 0141 will be observed with JWST in cycle 2 with the aim of detecting infrared excess caused by the planets. The planet candidate at WD 0141-675, however, was found to be a false positive caused by a software error. Habitability A search has been proposed for transits of hypothetical Earth-like planets around white dwarfs with surface temperatures of less than 10,000 K. Such stars could harbor a habitable zone at a distance of c. 0.005 to 0.02 AU that would last upwards of 3 billion years. This is so close that any habitable planets would be tidally locked. As a white dwarf has a size similar to that of a planet, these kinds of transits would produce strong eclipses. Newer research casts some doubt on this idea, given that the close orbits of those hypothetical planets around their parent stars would subject them to strong tidal forces that could render them uninhabitable by triggering a greenhouse effect. Another suggested constraint on this idea is the origin of such planets. Leaving aside formation from the accretion disk surrounding the white dwarf, there are two ways a planet could end up in a close orbit around a star of this kind: by surviving engulfment during the star's red giant phase and then spiralling inward, or by migrating inward after the white dwarf has formed. The former case is implausible for low-mass bodies, as they are unlikely to survive being absorbed by their stars. In the latter case, the planets would have to shed so much orbital energy as heat, through tidal interactions with the white dwarf, that they would likely end as uninhabitable embers. Binary stars and novae If a white dwarf is in a binary star system and is accreting matter from its companion, a variety of phenomena may occur, including novae and Type Ia supernovae. It may also be a super-soft X-ray source if it is able to take material from its companion fast enough to sustain fusion on its surface. On the other hand, phenomena in binary systems, such as tidal interaction and star–disk interaction, moderated by magnetic fields or not, act on the rotation of accreting white dwarfs. Indeed, the securely known fastest-spinning white dwarfs are members of binary systems, the fastest being the white dwarf in CTCV J2056-3014. A close binary system of two white dwarfs can lose angular momentum and radiate energy in the form of gravitational waves, causing their mutual orbit to steadily shrink until the stars merge. The mass of an isolated, nonrotating white dwarf cannot exceed the Chandrasekhar limit of ~1.4 M☉.
This limit may increase if the white dwarf is rotating rapidly and nonuniformly. White dwarfs in binary systems can accrete material from a companion star, increasing both their mass and their density. As their mass approaches the Chandrasekhar limit, this could theoretically lead to either the explosive ignition of fusion in the white dwarf or its collapse into a neutron star. There are two models that might explain the progenitor systems of Type Ia supernovae: the single-degenerate model and the double-degenerate model. In the single-degenerate model, a carbon–oxygen white dwarf accretes mass and compresses its core by pulling mass from a companion non-degenerate star. It is believed that compressional heating of the core leads to ignition of carbon fusion as the mass approaches the Chandrasekhar limit. Because the white dwarf is supported against gravity by quantum degeneracy pressure instead of by thermal pressure, adding heat to the star's interior increases its temperature but not its pressure, so the white dwarf does not expand and cool in response. Rather, the increased temperature accelerates the rate of the fusion reaction, in a runaway process that feeds on itself. The thermonuclear flame consumes much of the white dwarf in a few seconds, causing a Type Ia supernova explosion that obliterates the star. In another possible mechanism for Type Ia supernovae, the double-degenerate model, two carbon–oxygen white dwarfs in a binary system merge, creating an object with a mass greater than the Chandrasekhar limit in which carbon fusion is then ignited. In both cases, the white dwarfs are not expected to survive the Type Ia supernova. The single-degenerate model was long the favored mechanism for Type Ia supernovae, but, because of observations, the double-degenerate model is now thought to be the more likely scenario. Predicted rates of white dwarf–white dwarf mergers are comparable to the rate of Type Ia supernovae and would explain the lack of hydrogen in the spectra of Type Ia supernovae. The main mechanism for Type Ia supernovae nevertheless remains an open question. In the single-degenerate scenario, the accretion rate onto the white dwarf needs to be within a narrow range, dependent on its mass, so that the hydrogen burning on the surface of the white dwarf is stable. If the accretion rate is too low, novae on the surface of the white dwarf will blow away the accreted material. If it is too high, the white dwarf will expand and the white dwarf and companion star will be engulfed in a common envelope. This stops the growth of the white dwarf, preventing it from reaching the Chandrasekhar limit and exploding. In the single-degenerate model, the companion is expected to survive, but there is no strong evidence for such a star near Type Ia supernova sites. In the double-degenerate scenario, the white dwarfs need to be in very close binaries; otherwise their inspiral time is longer than the age of the universe. It is also likely that, instead of a Type Ia supernova, the merger of two white dwarfs will lead to core collapse: as a white dwarf accretes material quickly, the core can ignite off-center, which leads to gravitational instabilities that could create a neutron star. The historically bright SN 1006 is thought to have been a Type Ia supernova from a white dwarf, possibly the merger of two white dwarfs. Tycho's Supernova of 1572 was also a Type Ia supernova, and its remnant has been detected.
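The requirement above that double-degenerate progenitors sit in very close binaries can be quantified with the standard gravitational-wave inspiral timescale for a circular binary, t = 5c⁵a⁴ / (256 G³ m₁m₂(m₁+m₂)) (Peters 1964). Below is a minimal sketch, assuming two illustrative 0.6 M☉ white dwarfs (values not taken from the text):

```python
# Gravitational-wave inspiral time for a circular binary (Peters 1964):
# t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2)).
# Two 0.6 M_sun white dwarfs are assumed for illustration.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m
GYR = 3.156e16     # seconds in a gigayear

def inspiral_time_s(m1: float, m2: float, a_m: float) -> float:
    """Time (s) for a circular binary of separation a_m to merge by GW emission."""
    return 5.0 * C**5 * a_m**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

m = 0.6 * M_SUN
for sep_au in (0.01, 0.05):
    t = inspiral_time_s(m, m, sep_au * AU)
    print(f"a = {sep_au} AU -> t ~ {t / GYR:,.0f} Gyr")
# a = 0.01 AU merges in ~7 Gyr, within a Hubble time; at 0.05 AU the
# a^4 scaling stretches this to thousands of Gyr.
```

The steep a⁴ dependence is why only the very closest double white dwarfs can merge within the age of the universe.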
WD 0810–353, a white dwarf 11 parsecs away from the Sun, is possibly a hypervelocity runaway ejected from a Type Ia supernova, though this has been disputed. A post-common envelope binary (PCEB) is a binary consisting of a white dwarf or hot subdwarf and a close, tidally locked red dwarf (in other cases this might be a brown dwarf instead of a red dwarf). These binaries form when the red dwarf is engulfed during its companion's red giant phase. As the red dwarf orbits inside the common envelope, it is slowed down in the denser environment. This slowed orbital speed is compensated by a decrease in the orbital distance between the red dwarf and the core of the red giant. The red dwarf spirals inwards towards the core and might merge with it. If this does not happen and instead the common envelope is ejected, the binary ends up in a close orbit, consisting of a white dwarf and a red dwarf. This type of binary is called a post-common envelope binary. The evolution of the PCEB continues as the two dwarf stars orbit closer and closer due to magnetic braking and the release of gravitational waves. The binary might then evolve into one of several dramatic outcomes: a high-field magnetic white dwarf, a white dwarf pulsar, a double-degenerate binary, or even a Type Ia supernova. Because a PCEB may at some point evolve into a cataclysmic variable, some of them are also called pre-cataclysmic variables. Before accretion of material pushes a white dwarf close to the Chandrasekhar limit, accreted hydrogen-rich material on the surface may ignite in a less destructive type of thermonuclear explosion powered by hydrogen fusion. These surface explosions can be repeated as long as the white dwarf's core remains intact. This weaker kind of repetitive cataclysmic phenomenon is called a (classical) nova. Astronomers have also observed dwarf novae, which have smaller, more frequent luminosity peaks than classical novae. These are thought to be caused by the release of gravitational potential energy when part of the accretion disk collapses onto the star, rather than by a release of energy due to fusion. In general, binary systems with a white dwarf accreting matter from a stellar companion are called cataclysmic variables. As well as novae and dwarf novae, several other classes of these variables are known, including polars and intermediate polars, both of which feature highly magnetic white dwarfs. Both fusion- and accretion-powered cataclysmic variables have been observed to be X-ray sources. Other binaries include those that consist of a main-sequence star (or giant) and a white dwarf; the binary Sirius AB is an example of this type. White dwarfs can also exist in binaries or multiple star systems consisting only of white dwarfs. An example of a resolved triple white dwarf system is WD J1953−1019, discovered with Gaia DR2 data. One interesting field is the study of remnant planetary systems around white dwarfs. It is expected that planets orbiting several AU from a star will survive the star's post-main-sequence transformation into a white dwarf. Moreover, white dwarfs, being much smaller and correspondingly less luminous than their progenitors, are less likely to outshine any bodies in orbit around them. This makes white dwarfs advantageous targets for direct-imaging searches for exoplanets and brown dwarfs. The first brown dwarf to be detected by direct imaging was the companion to the white dwarf GD 165 A, discovered in 1988.
More recently, the white dwarf WD 0806−661 was found to have a cold companion body of substellar mass, variously described as a brown dwarf or an exoplanet.
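To put the luminosity argument above in numbers, the Stefan–Boltzmann law L = 4πR²σT⁴ fixes the contrast between a white dwarf and a Sun-like progenitor. A minimal sketch, using assumed illustrative values (0.01 R☉ radius, 10,000 K temperature) rather than figures from the text:

```python
# Luminosity contrast from the Stefan-Boltzmann law, L = 4 pi R^2 sigma T^4,
# in solar units: L/L_sun = (R/R_sun)**2 * (T/T_sun)**4.
# The white dwarf radius (0.01 R_sun) and temperature (10,000 K) are
# illustrative assumptions, not values from the text.
R_RATIO = 0.01     # white dwarf radius in units of the solar radius
T_WD = 10_000.0    # assumed effective temperature, K
T_SUN = 5_772.0    # solar effective temperature, K

l_ratio = R_RATIO**2 * (T_WD / T_SUN) ** 4
print(f"L_wd / L_sun ~ {l_ratio:.1e}")
# -> ~9e-4: roughly a thousandth of the Sun's luminosity, so a white
#    dwarf is far less likely to outshine faint orbiting companions
```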
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-Webopedia-2010-133] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
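For a sense of scale, the following toy calculation shows how long transferring one gigabyte would take at each of the NSFNET backbone speeds just quoted, assuming a fully utilized link and ignoring protocol overhead:

```python
# Toy calculation: time to move one gigabyte at each NSFNET backbone
# speed, assuming a fully utilized link and no protocol overhead.
GIGABYTE_BITS = 8e9   # 1 GB = 8e9 bits

for label, bits_per_second in (("56 kbit/s", 56e3),
                               ("1.5 Mbit/s (T1)", 1.5e6),
                               ("45 Mbit/s (T3)", 45e6)):
    seconds = GIGABYTE_BITS / bits_per_second
    print(f"{label:>16}: {seconds:9.0f} s ({seconds / 3600:.2f} h)")
# 56 kbit/s -> ~40 hours; T1 -> ~1.5 hours; T3 -> ~3 minutes.
```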
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public-access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
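A quick illustration of the 18-month doubling cadence cited above: traffic after t years scales as 2^(t/1.5), so a decade and a half of such growth multiplies volume roughly a thousandfold. A minimal sketch of the arithmetic:

```python
# Compound growth under the "doubling every 18 months" pattern:
# traffic after t years scales as 2**(t / 1.5). Illustrative arithmetic only.
def growth_factor(years: float, doubling_time_years: float = 1.5) -> float:
    return 2.0 ** (years / doubling_time_years)

for years in (3, 9, 15):
    print(f"after {years:2d} years: x{growth_factor(years):,.0f}")
# after 3 years: x4; after 9 years: x64; after 15 years: x1,024
```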
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as to the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and Internet usage by mobile and tablet devices exceeded desktop usage worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of the world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, the China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third, with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate, compared to India's 60% and the United States' 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East, and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters of the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels, from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar), is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings, as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the population of the global North. Wikis have also been used in the academic community for the sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas.
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The Internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, children may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources, as employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to the International Data Corporation, the size of worldwide e-commerce, when global business-to-business and business-to-consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written about the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet, such as maps and location-aware services, may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize during the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government, and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; it is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to a file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other, less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet's technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), the Internet Research Task Force (IRTF), and the Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very-high-speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide Internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, Internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for the fiber-optic submarine communication cables that connect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements) and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct Internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or are configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24. Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
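The prefix and netmask arithmetic described above can be reproduced with Python's standard ipaddress module, using the same example networks as the text:

```python
# CIDR prefixes and netmasks with Python's standard ipaddress module,
# using the example networks from the text.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")
print(net.netmask)           # 255.255.255.0
print(net.num_addresses)     # 256 (198.51.100.0 through 198.51.100.255)

# The routing prefix is any member address ANDed with the netmask:
addr = ipaddress.ip_address("198.51.100.77")
prefix = ipaddress.ip_address(int(addr) & int(net.netmask))
print(prefix, addr in net)   # 198.51.100.0 True

# IPv6 works the same way: 2001:db8::/32 leaves 128 - 32 = 96 host bits.
v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2**96)   # True
```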
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure, such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of web sites, or communication via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for the interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit children's access to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: Global Internet Traffic Volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to their small number of high-capacity links. Land cables are also vulnerable, as in 2011, when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the 2011 blockage of the Internet in Egypt, in which approximately 93% of networks were without access in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy: a 2014 peer-reviewed research paper found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
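As a quick sanity check on the figures above, the spread between the lowest and highest published energy-intensity estimates can be computed directly. The 1 TB transfer at the end is a hypothetical illustration, not a figure from the cited studies.

```python
# Energy-intensity figures (kWh per gigabyte) quoted from the 2014 paper.
low, high = 0.0064, 136.0

# The spread between the extremes -- roughly the cited "factor of 20,000".
print(round(high / low))  # 21250

# Hypothetical illustration: transferring 1 TB (1000 GB, decimal) would
# cost wildly different amounts of energy depending on which figure holds.
gb = 1000
print(f"{gb * low:.1f} kWh vs {gb * high:.0f} kWh")  # 6.4 kWh vs 136000 kWh
```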
========================================
[SOURCE: https://en.wikipedia.org/wiki/Thirty-seventh_government_of_Israel#Women's_rights] | [TOKENS: 9915]
Contents Thirty-seventh government of Israel The thirty-seventh government of Israel is the current cabinet of Israel, formed on 29 December 2022, following the Knesset election the previous month. The coalition government currently consists of five parties — Likud, Shas, Otzma Yehudit, Religious Zionist Party and New Hope — and is led by Benjamin Netanyahu, who took office as the prime minister of Israel for the sixth time. The government is widely regarded as the most right-wing government in the country's history, and includes far-right politicians. Several of the government's policy proposals have led to controversies, both within Israel and abroad, with the government's attempts at reforming the judiciary leading to a wave of demonstrations across the country. Following the outbreak of the Gaza war, opposition leader Yair Lapid initiated discussions with Netanyahu on the formation of an emergency government. On 11 October 2023, National Unity MKs Benny Gantz, Gadi Eisenkot, Gideon Sa'ar, Hili Tropper, and Yifat Shasha-Biton joined the Security Cabinet of Israel to form an emergency national unity government. Their accession to the Security Cabinet and to the government (as ministers without portfolio) was approved by the Knesset the following day. Gantz, Netanyahu, and Defense Minister Yoav Gallant became part of the newly formed Israeli war cabinet, with Eisenkot and Ron Dermer serving as observers. National Unity left the government in June 2024. New Hope rejoined the government in September. Otzma Yehudit announced on 19 January 2025 that it had withdrawn from the government, which took effect on 21 January, following the cabinet's acceptance of the three-phase Gaza war ceasefire proposal, though it rejoined two months later. United Torah Judaism left the government in July 2025 over dissatisfaction with the government's draft conscription law. Shas left the government several days later, though it remains part of the coalition. Background The right-wing bloc of parties, led by Benjamin Netanyahu, known in Israel as the national camp, won 64 of the 120 seats in the elections for the Knesset, while the coalition led by the incumbent prime minister Yair Lapid won 51 seats. The new majority has been variously described as the most right-wing government in Israeli history, as well as Israel's most religious government. Shortly after the elections, Lapid conceded to Netanyahu, and congratulated him, wishing him luck "for the sake of the Israeli people". On 15 November, the swearing-in ceremony for the newly elected members of the 25th Knesset was held during the opening session. The vote to appoint a new Speaker of the Knesset, which is usually conducted at the opening session, as well as the swearing in of cabinet members were postponed since ongoing coalition negotiations had not yet resulted in agreement on these positions. Government formation Yair Lapid Yesh Atid Benjamin Netanyahu Likud On 3 November 2022, Netanyahu told his aide Yariv Levin to begin informal coalition talks with allied parties, after 97% of the vote was counted. The leader of the Shas party Aryeh Deri met with Yitzhak Goldknopf, the leader of United Torah Judaism and its Agudat Yisrael faction, on 4 November. The two parties agreed to cooperate as members of the next government. 
The Degel HaTorah faction of United Torah Judaism stated on 5 November that it would maintain its ideological stance of not seeking any ministerial posts, per the instruction of its spiritual leader Rabbi Gershon Edelstein, but would seek other senior posts such as Knesset committee chairmanships and deputy ministerships. Netanyahu himself started holding talks on 6 November. He first met with Moshe Gafni, the leader of Degel HaTorah, and then with Goldknopf. Meanwhile, the Religious Zionist Party leader Bezalel Smotrich and the leader of its Otzma Yehudit faction, Itamar Ben-Gvir, pledged that they would not enter the coalition without the other faction. Gafni later met with Smotrich for coalition talks. Smotrich then met with Netanyahu. On 7 November, Netanyahu met with Ben-Gvir, who demanded the Ministry of Public Security with expanded powers for himself and the Ministry of Education or Transport and Road Safety for Yitzhak Wasserlauf. A major demand among all of Netanyahu's allies was that the Knesset be allowed to ignore the rulings of the Supreme Court. Netanyahu met with the Noam faction leader and its sole MK, Avi Maoz, on 8 November after Maoz threatened to boycott the coalition. Maoz demanded complete control of the Western Wall by the Haredi rabbinate and the removal of what he considered anti-Zionist and anti-Jewish content in schoolbooks. President Isaac Herzog began consultations with the heads of all the political parties on 9 November, after the election results were certified. During the consultations, he expressed his reservations about Ben-Gvir becoming a member of the next government. Shas met with Likud for coalition talks on 10 November. By 11 November, Netanyahu had secured recommendations from 64 MKs, which constituted a majority. He was given the mandate to form the thirty-seventh government of Israel by President Herzog on 13 November. Otzma Yehudit and Noam officially split from Religious Zionism on 20 November, per a pre-election agreement. On 25 November, Otzma Yehudit and Likud signed a coalition agreement under which Ben-Gvir would assume the newly created position of National Security Minister, whose powers would be more expansive than those of the Minister of Public Security, including oversight of the Israel Police and of the Israel Border Police in the West Bank, as well as powers for authorities to shoot thieves stealing from military bases. Yitzhak Wasserlauf was given the Ministry for the Development of the Negev and the Galilee, with expanded powers to regulate new West Bank settlements, separated from the "Periphery" portfolio, which would go to Shas. The deal also included giving the Ministry of Heritage to Amihai Eliyahu, separating it from the "Jerusalem Affairs" portfolio; the chairmanship of the Knesset's Public Security Committee to Zvika Fogel and that of the Special Committee for the Israeli Citizens' Fund to Limor Son Har-Melech; the post of Deputy Economic Minister to Almog Cohen; the establishment of a national guard; and the expansion of mobilization of reservists in the Border Police. Netanyahu and Maoz signed a coalition agreement on 27 November, under which the latter would become a deputy minister, would head an agency on Jewish identity in the Prime Minister's Office, and would also head Nativ, which processes aliyah from the former Soviet Union.
The agency for Jewish identity would have authority over educational content taught outside the regular curriculum in schools, in addition to the department of the Ministry of Education overseeing external teaching and partnerships, which would bring nonofficial organisations permitted to teach and lecture at schools under its purview. Likud signed a coalition agreement with the Religious Zionist Party on 1 December. Under the deal, Smotrich would serve as the Minister of Finance in rotation with Aryeh Deri, and the party would receive the post of a minister within the Ministry of Defense with control over the departments administering settlement and open lands under the Coordinator of Government Activities in the Territories, in addition to another post of a deputy minister. The deal also included giving the post of Minister of Aliyah and Integration to Ofir Sofer, the newly created National Missions Ministry to Orit Strook, and the chairmanship of the Knesset's Constitution, Law and Justice Committee to Simcha Rothman. Likud and United Torah Judaism signed a coalition agreement on 6 December, allowing a request for an extension of the deadline. Under it, the party would receive the Ministry of Construction and Housing, the chairmanship of the Knesset Finance Committee, which would be given to Moshe Gafni, and the Ministry of Jerusalem and Tradition (which would replace the Ministry of Jerusalem Affairs and Heritage), in addition to several posts of deputy ministers and chairmanships of Knesset committees. Likud also signed a deal with Shas by 8 December, securing interim coalition agreements with all of its allies. Under the deal, Deri would first serve as the Minister of Interior and Health, before rotating posts with Smotrich after two years. The party would also receive the Ministry of Religious Services and the Welfare Ministry, as well as posts of deputy ministers in the Ministries of Education and Interior. The vote to replace then-incumbent Knesset speaker Mickey Levy was scheduled for 13 December, after Likud and its allies secured the necessary number of signatures for it. Yariv Levin of Likud was elected as an interim speaker with 64 votes, while his opponents Merav Ben-Ari of Yesh Atid and Ayman Odeh of Hadash received 45 and five votes respectively. Netanyahu asked Herzog for a 14-day extension after the agreement with Shas to finalise the roles his allied parties would play. Herzog on 9 December extended the deadline to 21 December. On that date, Netanyahu informed Herzog that he had succeeded in forming a coalition, with the new government expected to be sworn in by 2 January 2023. The government was sworn in on 29 December 2022. Timeline Israeli law stated that people convicted of crimes could not serve in the government. An amendment to that law was made in late 2022, known colloquially as the Deri Law, to allow those who had been convicted without prison time to serve. This allowed Deri to be appointed to the cabinet. Shas leader Aryeh Deri was appointed Minister of Health, Minister of the Interior, and Vice Prime Minister in December 2022. He was fired in January 2023, following a Supreme Court decision that his appointment was unreasonable, since he had been convicted of fraud and had promised in a plea deal not to seek government roles. In March 2023, Defence Minister Yoav Gallant called on the government to delay legislation related to the judicial reform.
Prime Minister Netanyahu announced that Gallant had been dismissed from his position, leading to the continuation of mass protests across the country (which had started in January in Tel Aviv). Gallant continued to serve as a minister as he had not received formal notice of dismissal, and two weeks later it was announced that Netanyahu had reversed his decision. Public Safety Minister Itamar Ben-Gvir (Otzma Yehudit leader) and Minister of Justice Yariv Levin (Likud) both threatened to resign if the judicial reform was delayed.[better source needed] After the outbreak of the Gaza war, five members of the National Unity party joined the government as ministers without portfolio, with leader Benny Gantz being made a member of the new Israeli war cabinet (along with Netanyahu and Gallant). As the war progressed, minister of national security Itamar Ben-Gvir threatened to leave the government if the war was ended. A month later, in mid-December, he again threatened to leave if the war did not maintain "full strength". Gideon Sa'ar stated on 16 March 2024 that his New Hope party would resign from the government and join the opposition if Prime Minister Benjamin Netanyahu did not appoint him to the Israeli war cabinet. Netanyahu did not do so, resulting in Sa'ar's New Hope party leaving the government nine days later, reducing the size of the coalition from 76 MKs to 72. Ben-Gvir and Bezalel Smotrich, of the National Religious Party–Religious Zionism party, indicated that they would withdraw their parties from the government if the January 2025 Gaza war ceasefire was adopted, which would bring down the government. Ben-Gvir announced on 5 June that the members of his party would be allowed to vote as they wished, though his party resumed support on 9 June. On 18 May, Gantz set an 8 June deadline for withdrawal from the coalition, which was delayed by a day following the 2024 Nuseirat rescue operation. Gantz and his party left the government on 9 June, giving the government 64 seats in the Knesset. Sa'ar and his New Hope party rejoined the Netanyahu government on 30 September, increasing the number of seats held by the government to 68. The High Court of Justice ruled on 28 March 2024 that yeshiva funds would no longer be available for students who are "eligible for enlistment", effectively allowing ultra-Orthodox Jews to be drafted into the IDF. Attorney general Gali Baharav-Miara indicated on 31 March that the conscription process must begin on 1 April. The court ruled on 25 June that the IDF must begin to draft yeshiva students. Likud announced on 7 July that it would not put forward any legislation after Shas and United Torah Judaism said that they would boycott the plenary session over the lack of legislation dealing with the Haredi draft. The ultra-Orthodox boycott continued for a second day, with UTJ briefly ending its boycott on 9 July to unsuccessfully vote in favor of a bill which would have weakened the Law of Return. Yuli Edelstein, who was replaced by Boaz Bismuth on the Foreign Affairs and Defense Committee in early August, published a draft version of the conscription law shortly before his ouster. Bismuth cancelled work on the draft law in September 2025, which Edelstein called "a shame." Bismuth released the official version of the draft law in late November 2025. It weakened penalties for draft evaders, with Edelstein saying it was "the exact opposite" of the bill which he had attempted to pass.
Members of Otzma Yehudit resigned from the government on 19 January 2025 over the January 2025 Gaza war ceasefire, with the resignations taking effect on 21 January. The members rejoined in March, following the "resumption" of the war in Gaza. Avi Maoz of the Noam party left the government in March 2025. On 4 June 2025, senior rabbis for United Torah Judaism Dov Lando and Moshe Hillel Hirsch instructed the party's MKs to pass a bill which would dissolve the Knesset. Yesh Atid, Yisrael Beytenu and The Democrats announced that they would "submit a bill" for dissolution on 11 June, with Yesh Atid tabling the bill on 4 June. There were also reports that Shas would vote in favor of Knesset dissolution amidst division within the governing coalition on Haredi conscription. This jeopardized the coalition's majority and would have triggered new elections if the bill passed. The following day, Agudat Yisrael, one of the United Torah Judaism factions, confirmed that it would submit a bill to dissolve the Knesset. Asher Medina, a Shas spokesman, indicated on 9 June that the party would vote in favor of a preliminary bill to dissolve the Knesset. The rabbis of Degel HaTorah instructed the party's MKs on 12 June 2025 to oppose the dissolution of the Knesset, which was followed by Yuli Edelstein and the Shas and Degel HaTorah parties announcing that a deal had been reached, with "rabbinical leaders" telling their parties to delay the dissolution vote by a week. Shas and Degel HaTorah voted against the dissolution bill, which led to the bill failing its preliminary reading in a vote of 61 against and 53 in favor. MKs Ya'akov Tessler and Moshe Roth of Agudat Yisrael voted in favor of dissolution. Another dissolution bill cannot be brought forward for six months. If the bill had passed its preliminary reading, in addition to three more readings, an election would have been held in approximately three months; The Jerusalem Post posited it would have been held in October. Degel HaTorah announced on 14 July 2025 that it would leave the government because members of the party were dissatisfied after viewing the proposed draft bill by Yuli Edelstein regarding Haredi exemptions from the Israeli draft. Several hours later, Agudat Yisrael announced that it would also leave the government. Deputy Transportation Minister Uri Maklev; Moshe Gafni, the head of the Knesset Finance Committee; Ya'akov Asher, the head of the Knesset Interior and Environment Protection Committee; and Jerusalem Affairs minister Meir Porush all submitted their resignations, with the resignations taking effect in 48 hours. Sports Minister Ya'akov Tessler and Special Committee for Public Petitions chair Yitzhak Pindrus also submitted resignations. Yisrael Eichler submitted his resignation as head of the Knesset Labor and Welfare Committee the same day. The resignations left Netanyahu's government with 60 seats in the Knesset, as Avi Maoz, of the Noam party, had left the government in March 2025. Despite Edelstein's ouster in August, a spokesman for UTJ head Yitzhak Goldknopf remarked that it would not change the faction's withdrawal from the government. The religious council for Shas, called the Moetzet Chachmei HaTorah, instructed the party on 16 July to leave the government, but to stay in the coalition. The following day, various cabinet ministers submitted their resignations, including Interior Minister Moshe Arbel, Social Affairs Minister Ya'akov Margi and Religious Services Minister Michael Malchieli.
Malchieli reportedly postponed his resignation so that he could attend a 20 July meeting of the panel investigating whether attorney general Gali Baharav-Miara should be dismissed. Deputy Minister of Agriculture Moshe Abutbul, Minister of Health Uriel Buso and Haim Biton, a minister in the Education Ministry, also submitted their resignation letters, while Arbel retracted his. The last cabinet member from the party to submit a resignation was Labor Minister Yoav Ben-Tzur. The ministers who resigned will return to the Knesset, replacing MKs Moshe Roth, Yitzhak Pindrus and Eliyahu Baruchi. Members of government Listed below are the current ministers in the government: Principles and priorities According to the agreements signed between Likud and each of its coalition partners, and the incoming government's published guideline principles, its stated priorities are to combat the cost of living, further centralize Orthodox control over the state religious services, pass judicial reforms that include legislation to reduce judicial controls on executive and legislative power, expand settlements in the West Bank, and consider annexation of the West Bank. Before the vote of confidence in his new government in the Knesset, Netanyahu presented three top priorities for the new government: internal security and governance, halting the nuclear program of Iran, and the development of infrastructure, with a focus on further connecting the center of the country with its periphery. Policies The government's flagship program, centered on reforms of the judicial branch, drew widespread criticism. Critics said it would have negative effects on the separation of powers, the office of the Attorney General, the economy, public health, women and minorities, workers' rights, scientific research, the overall strength of Israel's democracy and its foreign relations. After weeks of public protests on Israel's streets, joined by a growing number of military reservists, Minister of Defense Yoav Gallant spoke against the reform on 25 March, calling for a halt of the legislative process "for the sake of Israel's security". The next day, Netanyahu announced that Gallant would be removed from his post, sparking another wave of protest across Israel and ultimately leading to Netanyahu agreeing to pause the legislation. On 10 April, Netanyahu announced that Gallant would keep his post. On 27 March 2023, after the public protests and general strikes, Netanyahu announced a pause in the reform process to allow for dialogue with opposition parties. However, negotiations aimed at reaching a compromise collapsed in June, and the government resumed its plans to unilaterally pass parts of the legislation. On 24 July 2023, the Knesset passed a bill that curbs the power of the Supreme Court to declare government decisions unreasonable; on 1 January 2024, the Supreme Court struck the bill down. The Knesset passed a "watered-down" version of the judicial reform package in late March 2025 which "changes the composition" of the judicial selection committee. In December 2022 Minister of National Security Itamar Ben-Gvir sought to amend the law that regulates the operations of the Israel Police, such that the ministry would have more direct control of its forces and policies, including its investigative priorities. Attorney General Gali Baharav-Miara objected to the draft proposal, raising concerns that the law would enable the politicization of police work, and the draft was amended to partially address those concerns.
Nevertheless, in March 2023 Deputy Attorney General Gil Limon stated that the Attorney General's fears had been realized, referring to several instances of ministerial involvement in the day-to-day work of the otherwise independent police force – statements that were repeated by the Attorney General herself two days later. Separately, Police Commissioner Kobi Shabtai instructed Deputy Commissioners to avoid direct communication with the minister, later stating that "the Israel Police will remain apolitical, and act only according to law". Following appeals by the Association for Civil Rights in Israel and the Movement for Quality Government in Israel, the High Court of Justice instructed Ben-Gvir "to refrain from giving operational directions to the police... [especially] as regards to protests and demonstrations against the government." As talks of halting the judicial reform gained momentum during March 2023, Minister of National Security Itamar Ben-Gvir threatened to resign if the legislation implementing the changes was suspended. To appease Ben-Gvir, Prime Minister Netanyahu announced that the government would promote the creation of a new National Guard, to be headed by Ben-Gvir. On 29 March, thousands of Israelis demonstrated in Tel Aviv, Haifa and Jerusalem against this decision. On 1 April, the New York Times quoted Gadeer Nicola, head of the Arab department at the Association for Civil Rights in Israel, as saying "If this thing passes, it will be an imminent danger to the rights of Arab citizens in this country. This will create two separate systems of applying the law. The regular police which will operate against Jewish citizens — and a militarized militia to deal only with Arab citizens." The same day, while speaking on Israel's Channel 13 about those whom he would like to see enlist in the National Guard, Ben-Gvir specifically mentioned La Familia, the far-right fan club of the Beitar Jerusalem soccer team. On 2 April, Israel's cabinet approved the establishment of a law enforcement body that would operate independently of the police, under Ben-Gvir's authority. According to the decision, the Minister was to establish a committee chaired by the Director General of the Ministry of National Security, with representatives of the ministries of defense, justice and finance, as well as the police and the IDF, to outline the operations of the new organization. The committee's recommendations would be submitted to the government for consideration. Addressing a conference on 4 April, Police Commissioner Kobi Shabtai said that he was not opposed to the establishment of a security body which would answer to the police, but "a separate body? Absolutely not." The police chief said he had warned Ben-Gvir that the establishment of a security body separate from the police is "unnecessary, with extremely high costs that may harm citizens' personal security." During a press conference on 10 April, Prime Minister Netanyahu said, in what has been seen by some news outlets as a concession to the protesters, that "This will not be anyone's militia, it will be a security body, orderly, professional, that will be subordinate to one of the [existing] security bodies." The committee established by the government recommended that the government order the establishment of the National Guard immediately and allocate budgets for it. The National Guard, to be commanded by a police superintendent, would not be subordinate to Ben-Gvir.
It would instead be subordinate to the police commissioner and form part of the Israel Border Police. The Ministries of Defense and Finance opposed the conclusions. The Israeli National Security Council called for further discussion on this. The coalition's efforts to expand the purview of Rabbinical courts; to force some organizations, such as hospitals, to enforce certain religious practices; to amend the Law Prohibiting Discrimination to allow gender segregation and discrimination on the grounds of religious belief; to expand funding for religious causes; and to put into law the exemption of yeshiva and kolel students from conscription have drawn criticism. According to a Haaretz op-ed of 7 March 2023, "the current coalition is interested... in modifying the public space so it suits the religious lifestyle. The legal coup is meant to castrate anyone who can prevent it, most of all the HCJ." Several banks and institutional investors, including Israel Discount Bank and AIG, have committed to avoid investing in, or providing credit to, any organization that discriminates against others on grounds of religion, race, gender or sexual orientation. A series of technology companies and investment firms, including Wiz, Intel Israel, Salesforce and Microsoft Israel Research and Development, have criticized the proposed changes to the Law Prohibiting Discrimination, with Wiz stating that it will require its suppliers to commit to preventing discrimination. Over sixty prominent law firms pledged that they would neither represent, nor do business with, discriminating individuals and organizations. Insight Partners, a major private equity fund operating in Israel, released a statement warning against intolerance and any attempt to harm personal liberties. Orit Lahav, chief executive of the women's rights organization Mavoi Satum ("Dead End"), said that "the Rabbinical courts are the most discriminatory institution in the State of Israel... Limiting the HCJ[d] while expanding the jurisdiction of the Rabbinical courts would... cause significant harm to women." Anat Thon Ashkenazy, Director of the Center for Democratic Values and Institutions at the Israel Democracy Institute, said that "almost every part of the reform could harm women... the meaning of an override clause is that even if the court says that the law on gender segregation is illegitimate, is harmful, the Knesset could say 'Okay, we say otherwise'". She added that "there is a very broad institutional framework here, after which there will come legislation that harms women's right and we will have no way of protecting or stopping it." During July 2023, 20 professional medical associations signed a position letter warning against the ramifications for public health that would result from the exclusion of women from the public sphere. They cited, among other effects, a rise in the prevalence of risk factors for cardiovascular disease, pregnancy-related ailments, psychological distress, and the risk of suicide. On 30 July the Knesset passed an amendment to the penal law adding sexual offenses to those offenses whose penalty can be doubled if committed on grounds of "nationalistic terrorism, racism or hostility towards a certain community". According to MK Limor Son Har-Melech, the bill is meant to penalize any individual who "[intends to] harm a woman sexually based on her Jewishness".
The law was criticized by MK Gilad Kariv as "populist, nationalistic, and dangerous towards the Arab citizens of Israel", and by MK Ahmad Tibi as a "race law", and was objected to by legal advisors at the Ministry of Justice and the Knesset Committee on National Security. Activist Orit Kamir wrote that "the amendment... is neither feminist, equal, nor progressive, but the opposite: it subordinates women's sexuality to the nationalistic, racist patriarchy. It hijacks the Law for Prevention of Sexual Harassment to serve a world view that tags women as sexual objects that personify the nation's honor." Yael Sherer, director of the Lobby to Combat Sexual Violence, criticized the law as being informed by dated ideas about sexual assault, and proposed that MKs "dedicate a session... to give victims of sexual assault an opportunity to come out of the darkness... instead of [submitting] declarative bills that change nothing and are not meant but for grabbing headlines". In Israel, during 2022, 24 women "were murdered because they were women," an increase of 50% compared to 2021. A law permitting courts to order men subject to a restraining order following domestic violence offenses to wear electronic tags was drafted during the previous Knesset and had passed its first reading unanimously. On 22 March 2023, the Knesset voted to reject the bill. It had been urged to do so by National Security Minister Itamar Ben-Gvir, who said that the bill was unfair to men. Earlier in the week, Ben-Gvir had blocked the measure from advancing in the ministerial legislative committee. The MKs voting against the bill included Prime Minister Netanyahu. The Association of Families of Murder Victims said that by rejecting the law, National Security Minister Itamar Ben-Gvir "brings joy to violent men and abandons the women threatened with murder… unsupervised restraining orders endanger women's lives even more. They give women the illusion of being protected, and then they are murdered." MK Pnina Tamano-Shata, chairwoman of the Knesset Committee on the Status of Women and Gender Equality, said that "the coalition proved today that it despises women's lives." The NGO Amutat Bat Melech, which assists Orthodox and ultra-Orthodox women who suffer from domestic violence, said: "Rejecting the electronic bracelet bill is disconnected from the terrible reality of seven femicides since the beginning of the year. This is an effective tool of the first degree that could have saved lives and reduced the threat to women suffering from domestic violence. This is a matter of life and death, whose whole purpose is to provide a solution to defend women." The agreement signed by the coalition parties includes the setting up of a committee to draft changes to the Law of Return. Israeli religious parties have long demanded that the "grandchild clause" of the Law of Return be cancelled. This clause grants citizenship to anyone with at least one Jewish grandparent, as long as they do not practice another religion. If the grandchild clause were to be removed from the Law of Return, then around 3 million people who are currently eligible for aliyah would no longer be eligible.
The heads of the Jewish Agency, the Jewish Federations of North America, the World Zionist Organization and Keren Hayesod sent a joint letter to Prime Minister Netanyahu, expressing their "deep concern" about any changes to the Law of Return, adding that "Any change in the delicate and sensitive status quo on issues such as the Law of Return or conversion could threaten to unravel the ties between us and keep us away from each other." The Executive Council of Australian Jewry and the Zionist Federation of Australia issued a joint statement saying "We… view with deep concern… proposals in relation to religious pluralism and the law of return that risk damaging Israel's… relationship with Diaspora Jewry." On 19 March 2023, Israeli Finance Minister Bezalel Smotrich spoke in Paris at a memorial service for a Likud activist. The lectern at which Smotrich spoke was covered with a flag depicting the 'Greater Land of Israel,' encompassing the whole of Mandatory Palestine, as well as Trans-Jordan. During his speech, Smotrich said that "there's no such thing as Palestinians because there's no such thing as a Palestinian people." He added that the Palestinian people are a fictitious nation invented only to fight the Zionist movement, asking "Is there a Palestinian history or culture? There isn't any." The event received widespread media coverage. On 21 March, a spokesman for the US State Department sharply criticized Smotrich's comments. "The comments, which were delivered at a podium adorned with an inaccurate and provocative map, are offensive, they are deeply concerning, and, candidly, they're dangerous. The Palestinians have a rich history and culture, and the United States greatly values our partnership with the Palestinian people," he said. The Jordanian Foreign Ministry also voiced disapproval: "The Israeli Minister of Finance's use, during his participation in an event held yesterday in Paris, of a map of Israel that includes the borders of the Hashemite Kingdom of Jordan and the occupied Palestinian territories represents a reckless inflammatory act, and a violation of international norms and the Jordanian-Israeli peace treaty." Additionally, a map encompassing Mandatory Palestine and Trans-Jordan with a Jordanian flag on it was placed on a central lectern in the Jordanian Parliament. Jordan's parliament voted to expel the Israeli ambassador. Israel's Ministry of Foreign Affairs released a clarification on the matter, stating that "Israel is committed to the 1994 peace agreement with Jordan. There has been no change in the position of the State of Israel, which recognizes the territorial integrity of the Hashemite Kingdom of Jordan". Ahead of a Europe Day event due to take place on 9 May 2023, far-right National Security Minister Itamar Ben-Gvir was assigned as a representative of the government and a speaker at the event by the government secretariat, which assigns ministers to receptions marking foreign embassies' national days. The European Union requested that Ben-Gvir not attend, but the government did not make changes to the plan. On 8 May, the European delegation to Israel cancelled the reception, stating that: "The EU Delegation to Israel is looking forward to celebrating Europe Day on May 9, as it does every year. Regrettably, this year we have decided to cancel the diplomatic reception, as we do not want to offer a platform to someone whose views contradict the values the European Union stands for.
However, the Europe Day cultural event for the Israeli public will be maintained to celebrate with our friends and partners in Israel the strong and constructive bilateral relationship". Israel's Opposition Leader Yair Lapid stated: "Sending Itamar Ben-Gvir to a gathering of EU ambassadors is a serious professional mistake. The government is embarrassing a large group of friendly countries, jeopardizing future votes in international institutions, and damaging our foreign relations. Last year, after a decade of efforts, we succeeded in signing an economic-political agreement with the European Union that will contribute to the Israeli economy and our foreign relations. Why risk it, and for what? Ben-Gvir is not a legitimate person in the international community (and not really in Israel either), and sometimes you have to be both wise and just and simply send someone else". On 23 February 2023, Defense Minister Gallant signed an agreement assigning governmental powers in the West Bank to a body to be headed by Minister Bezalel Smotrich, who would effectively become the governor of the West Bank, controlling almost all areas of life in the area, including planning, building and infrastructure. Israeli governments had hitherto been careful to administer the occupation as a military government. The temporary holding of power by an occupying military force, pending a negotiated settlement, is a principle of international law – an expression of the prohibition against obtaining sovereignty through conquest that was introduced in the wake of World War II. An editorial in Haaretz noted that the assignment of governmental powers in the West Bank to a civilian governor, alongside the plan to expand the dual justice system so that Israeli law will apply fully to settlers in the West Bank, constitutes de jure annexation of the West Bank. On 26 February 2023, following the 2023 Huwara shooting in which two Israelis were killed by an unidentified attacker, hundreds of Israeli settlers attacked the Palestinian town of Huwara and three nearby villages, setting alight hundreds of Palestinian homes (some with people in them), businesses, a school, and numerous vehicles, killing one Palestinian man and injuring 100 others. Bezalel Smotrich subsequently called on Twitter for Huwara to be "wiped out" by the Israeli government. MK Zvika Fogel, of the ultra-nationalist Otzma Yehudit, which forms part of the governing coalition, said that he "looks very favorably upon" the results of the rampage. Members of the coalition proposed an amendment to the Disengagement Law, which would allow Israelis to resettle settlements vacated during the 2005 Israeli disengagement from Gaza and the northern West Bank. The evacuated settlements were considered illegal under international law by most countries. The proposal was approved for voting by the Foreign Affairs and Defense Committee on 9 March 2023, while the committee was still waiting for briefing materials from the NSS, IDF, MFA and Shin Bet, and was passed on 21 March. The US requested clarification from Israeli ambassador Michael Herzog. A US State Department spokesman stated that "The U.S.
strongly urges Israel to refrain from allowing the return of settlers to the area covered by the legislation, consistent with both former Prime Minister Sharon and the current Israeli Government's commitment to the United States," noting that the actions represent a clear violation of undertakings given by the Sharon government to the Bush administration in 2005 and by Netanyahu's far-right coalition to the Biden administration the previous week. Minister of Communications Shlomo Karhi had initially intended to cut the funding of the Israeli Public Broadcasting Corporation (also known by its blanket branding Kan) by 400 million shekels – roughly half of its total budget – closing several departments, and privatizing content creation. In response, the Director-General of the European Broadcasting Union, Noel Curran, sent two urgent letters to Netanyahu, expressing his concerns and calling on the Israeli government to "safeguard the independence of our Member KAN and ensure it is allowed to operate in a sustainable way, with funding that is both stable, adequate, fair, and transparent." On 25 January 2023, nine journalist organizations representing some of Kan's competitors issued a statement of concern, acknowledging the "important contribution of public broadcasting in creating a worthy, unbiased and non-prejudicial journalistic platform", and noting that "the existence of the [broadcasting] corporation as a substantial public broadcast organization strengthens media as a whole, adding to the competition in the market rather than weakening it." They also expressed their concern that the "real reason" for the proposal was actually "an attempt to silence voices from which... [the Minister] doesn't always draw satisfaction". The same day, hundreds of journalists, actors and filmmakers protested in Tel Aviv. The proposal was eventually put on hold. On 22 February 2023 it was reported that Prime Minister Netanyahu was attempting to appoint his close associate Yossi Shelley as the deputy to the National Statistician — a highly sensitive position in charge of providing accurate data for decision makers. The appointment of Shelley, who did not possess the required qualifications for the role, was withdrawn following publication. In its daily editorial, Haaretz tied this attempt to the judicial reform: "once they take control of the judiciary, law enforcement and public media, they wish to control the state's data base, the dry numerical data it uses to plan its future". Netanyahu also proposed Avi Simhon for the role, and eventually froze all appointments at the Israel Central Bureau of Statistics. Also on 22 February 2023, it was revealed that Yoav Kish, the Minister of Education, was promoting a draft government decision to change the National Library of Israel's board of directors, which would grant him more power over the institution. In response, the Hebrew University – which owned the library until 2008 – announced that if the draft were accepted, it would withdraw its collections from the library. The university's collections, which according to the university constitute some 80% of the library's collection, include the Agnon archive, the original manuscript of Hatikvah, and the Rothschild Haggadah, the oldest known Haggadah.
A group of 300 authors and poets signed an open letter against the move, further noting their objection to "political takeover" of public broadcasting, as well as to "any legislation that will castrate the judiciary and damage the democratic foundations of the state of Israel". Several days later, it was reported that a series of donors had decided to withhold their donations to the library, totaling some 80 million shekels. On 3 March a petition against the move by 1,500 academics, including Israel Prize laureates, was sent to Kish. The proposal has been seen by some as retribution against Shai Nitzan, the former State Attorney and the library's current rector. On 5 March it was reported that the Legal Advisor to the Ministry of Finance, Asi Messing, was withholding the proposal. According to Messing, the proposal – which was being promoted as part of the Economic Arrangements Law – "was not reviewed... by the qualified personnel in the Ministry of Finance, does not align with any of the common goals of the economic plan, was not agreed to by myself and was not approved by the Attorney General." As of February 2023, the government has been debating several proposals that would significantly weaken the Ministry of Environmental Protection, including reducing the environmental regulation of planning, development and electricity production. One of the main proposals, the transfer of a 3-billion-shekel fund meant to finance waste management plants from the Ministry of Environmental Protection to the Ministry of the Interior, was eventually withdrawn. The Minister of Environmental Protection, Idit Silman, has been criticized for meeting with climate change denialists, for wasteful and personally motivated travel at the ministry's expense, for politicizing the role, and for engaging in political activity on the ministry's time. The government has been noted for an unusually high number of dismissals and resignations of senior career civil servants, and for frequent attempts to replace them with candidates with known political associations, who are often less competent. According to sources, Netanyahu and people in his vicinity are seeking out civil servants who were appointed by the previous government, intent on replacing them with people loyal to him. Governmental nominees for various positions have been criticized for lack of expertise. In addition to the nominee for the position of Deputy National Statistician (see above), the Director General of the Ministry of Finance, Shlomi Heisler; the Director General of the Ministry of Justice, Itamar Donenfeld; and the Director General of the Ministry of Transport, Moshe Ben Zaken, have all been criticized for incompetence, lack of familiarity with their ministries' subject matter, lack of interest in the job, or lack of experience in managing large organizations. It has been reported that in some ministries, senior officials were enacting slowdowns as a means of dealing with the new ministers and director generals. On 28 July the director general of the Ministry of Education resigned, citing the societal "rift" as his reason. Asaf Zalel, a retired Air Force Brigadier General, was appointed in January. When asked about attempts to appoint his personal friend and attorney to the board of directors of a state-owned company, Minister David Amsalem replied: "that is my job, due to my authority to appoint directors. I put forward people that I know and hold in esteem".
Under Minister of Transport Miri Regev, the ministry has either dismissed or lost the heads of the National Public Transport Authority, the Israel Airports Authority, the National Road Safety Authority and Israel Railways, as well as several officials in Netivei Israel. The current chair of Netivei Israel is Likud member and Regev associate Yigal Amadi, and the legal counsel is Einav Abuhzira, daughter of a former Likud branch chair. Abuhzira was appointed in place of Elad Berdugo, nephew of Netanyahu surrogate Yaakov Bardugo, after he was disqualified for the role by the Israel Government Companies Authority. In July 2023 the Minister of Communications, Shlomo Karhi, and the minister in charge of the Israel Government Companies Authority, Dudi Amsalem, deposed the chair of the Israel Postal Company, Michael Vaknin. The chair, who had been hired to lead the company's financial recovery after years of operational loss and toward privatization, had gained the support of officials at the Authority and at the Ministry of Finance; nevertheless, the ministers claimed that his performance was inadequate, and nominated in his place Yiftah Ron-Tal, who has known ties to Netanyahu and Smotrich. They also nominated four new directors, two of whom have known political associations, and a third who was a witness in Netanyahu's trial. The coalition is allowed to spend a portion of the state's budget on a discretionary basis, meant to coax member parties into reaching an agreement on the budget. As of May 2023, the government was pushing an allocation of over 13 billion shekels over two years, almost seven times the amount allocated by the previous government. Most of the funds would be allocated for uses associated with the religious, Orthodox and settler communities. The head of the Budget Department at the Ministry of Finance, Yoav Gardos, objected to the allocations, arguing that they would exacerbate unemployment in the Orthodox community, which is projected to cost the economy a total of 6.7 trillion shekels in lost output by 2065. At the onset of the Gaza war and the declaration of a state of national emergency, Minister of Finance Bezalel Smotrich instructed government agencies to continue with the planned distribution of discretionary funds. Corruption During March 2023, the government was promoting an amendment to the Law on Public Service (Gifts) that would allow Netanyahu to receive donations to fund his legal defense. The amendment followed a decision by the High Court of Justice (HCJ) that forced Netanyahu to refund US$270,000 given to him and his wife by his late cousin, Nathan Mileikowsky, for their legal defense. This was in contrast to past statements by Minister of Justice Yariv Levin, who had spoken against the possible conflict of interests that could result from such transactions. The bill was opposed by Attorney General Gali Baharav-Miara, who stressed that it could "create a real opportunity for governmental corruption", and was eventually withdrawn at the end of March. As of March 2023, the coalition was promoting a bill that would prevent judicial review of ministerial appointments. The bill was intended to prevent the HCJ from reviewing the appointment of the twice-convicted chairman of Shas, Aryeh Deri (convicted of bribery, fraud, and breach of trust), to a ministerial position, after his previous appointment was annulled on grounds of unreasonableness.
The bill followed on the heels of another amendment, which relaxed the ban on the appointment of convicted criminals so that Deri – who was handed a suspended sentence after his second conviction – could be appointed. The bill was opposed by the Attorney General, as well as by the Knesset Legal Adviser, Sagit Afik. Israeli law allows for declaring a Prime Minister (as well as several other high-ranking public officials) temporarily or permanently incapacitated, but does not specify the conditions that can lead to a declaration of incapacitation. In the case of the Prime Minister, the authority to do so is given to the Attorney General. In March 2023, the coalition advanced a bill that passes this authority from the Attorney General to the government, with the approval of the Knesset committee, and clarifies that incapacitation can only result from medical or mental conditions. On 3 January 2024, the Supreme Court ruled by a majority of 6 out of 11 that the validity of the law would be postponed to the next Knesset, because the bill in its immediate application is a personal law intended to serve a distinct personal purpose. Later, the court rejected a petition seeking to have Netanyahu declared an incapacitated prime minister due to his ongoing trial and conflicts of interest.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Thirty-seventh_government_of_Israel#Sexual_assault_on_grounds_of_nationality] | [TOKENS: 9915]
Contents Thirty-seventh government of Israel The thirty-seventh government of Israel is the current cabinet of Israel, formed on 29 December 2022, following the Knesset election the previous month. The coalition government currently consists of five parties — Likud, Shas, Otzma Yehudit, Religious Zionist Party and New Hope — and is led by Benjamin Netanyahu, who took office as the prime minister of Israel for the sixth time. The government is widely regarded as the most right-wing government in the country's history, and includes far-right politicians. Several of the government's policy proposals have led to controversies, both within Israel and abroad, with the government's attempts at reforming the judiciary leading to a wave of demonstrations across the country. Following the outbreak of the Gaza war, opposition leader Yair Lapid initiated discussions with Netanyahu on the formation of an emergency government. On 11 October 2023, National Unity MKs Benny Gantz, Gadi Eisenkot, Gideon Sa'ar, Hili Tropper, and Yifat Shasha-Biton joined the Security Cabinet of Israel to form an emergency national unity government. Their accession to the Security Cabinet and to the government (as ministers without portfolio) was approved by the Knesset the following day. Gantz, Netanyahu, and Defense Minister Yoav Gallant became part of the newly formed Israeli war cabinet, with Eisenkot and Ron Dermer serving as observers. National Unity left the government in June 2024. New Hope rejoined the government in September. Otzma Yehudit announced on 19 January 2025 that it had withdrawn from the government, which took effect on 21 January, following the cabinet's acceptance of the three-phase Gaza war ceasefire proposal, though it rejoined two months later. United Torah Judaism left the government in July 2025 over dissatisfaction with the government's draft conscription law. Shas left the government several days later, though it remains part of the coalition. Background The right-wing bloc of parties, led by Benjamin Netanyahu, known in Israel as the national camp, won 64 of the 120 seats in the elections for the Knesset, while the coalition led by the incumbent prime minister Yair Lapid won 51 seats. The new majority has been variously described as the most right-wing government in Israeli history, as well as Israel's most religious government. Shortly after the elections, Lapid conceded to Netanyahu, and congratulated him, wishing him luck "for the sake of the Israeli people". On 15 November, the swearing-in ceremony for the newly elected members of the 25th Knesset was held during the opening session. The vote to appoint a new Speaker of the Knesset, which is usually conducted at the opening session, as well as the swearing in of cabinet members were postponed since ongoing coalition negotiations had not yet resulted in agreement on these positions. Government formation Yair Lapid Yesh Atid Benjamin Netanyahu Likud On 3 November 2022, Netanyahu told his aide Yariv Levin to begin informal coalition talks with allied parties, after 97% of the vote was counted. The leader of the Shas party Aryeh Deri met with Yitzhak Goldknopf, the leader of United Torah Judaism and its Agudat Yisrael faction, on 4 November. The two parties agreed to cooperate as members of the next government. 
The Degel HaTorah faction of United Torah Judaism stated on 5 November that it will maintain its ideological stance about not seeking any ministerial posts, as per the instruction of its spiritual leader Rabbi Gershon Edelstein, but will seek other senior posts like Knesset committee chairmen and deputy ministers. Netanyahu himself started holding talks on 6 November. He first met with Moshe Gafni, the leader of Degel HaTorah, and then with Goldknopf. Meanwhile, the Religious Zionist Party leader Bezalel Smotrich and the leader of its Otzma Yehudit faction Itamar Ben-Gvir pledged that they would not enter the coalition without the other faction. Gafni later met with Smotrich for coalition talks. Smotrich then met with Netanyahu. On 7 November, Netanyahu met with Ben-Gvir who demanded the Ministry of Public Security with expanded powers for himself and the Ministry of Education or Transport and Road Safety for Yitzhak Wasserlauf. A major demand among all of Netanyahu's allies was that the Knesset be allowed to ignore the rulings of the Supreme Court. Netanyahu met with the Noam faction leader and its sole MK Avi Maoz on 8 November after he threatened to boycott the coalition. He demanded complete control of the Western Wall by the Haredi rabbinate and removal of what he considered as anti-Zionist and anti-Jewish content in schoolbooks. President Isaac Herzog began consultations with heads of all the political parties on 9 November after the election results were certified. During the consultations, he expressed his reservations about Ben-Gvir becoming a member in the next government. Shas met with Likud for coalition talks on 10 November. By 11 November, Netanyahu had secured recommendations from 64 MKs, which constituted a majority. He was given the mandate to form the thirty-seventh government of Israel by President Herzog on 13 November. Otzma Yehudit and Noam officially split from Religious Zionism on 20 November as per a pre-election agreement. On 25 November, Otzma Yehudit and Likud signed a coalition agreement, under which Ben-Gvir will assume the newly created position of National Security Minister, whose powers would be more expansive than that of the Minister of Public Security, including overseeing the Israel Police and the Israel Border Police in the West Bank, as well as giving powers to authorities to shoot thieves stealing from military bases. Yitzhak Wasserlauf was given the Ministry for the Development of the Negev and the Galilee with expanded powers to regulate new West Bank settlements, while separating it from the "Periphery" portfolio, which will be given to Shas. The deal also includes giving the Ministry of Heritage to Amihai Eliyahu, separating it from the "Jerusalem Affairs" portfolio, the chairmanship of the Knesset's Public Security Committee to Zvika Fogel and that of the Special Committee for the Israeli Citizens' Fund to Limor Son Har-Melech, the post of Deputy Economic Minister to Almog Cohen, establishment of a national guard, and expansion of mobilization of reservists in the Border Police. Netanyahu and Maoz signed a coalition agreement on 27 November, under which the latter would become a deputy minister, would head an agency on Jewish identity in the Prime Minister's Office, and would also head Nativ, which processes the aliyah from the former Soviet Union. 
The agency for Jewish identity would have authority over educational content taught outside the regular curriculum in schools, in addition to the department of the Ministry of Education overseeing external teaching and partnerships, which would bring the non-official organizations permitted to teach and lecture at schools under its purview. Likud signed a coalition agreement with the Religious Zionist Party on 1 December. Under the deal, Smotrich would serve as the Minister of Finance in rotation with Aryeh Deri, and the party would receive the post of a minister within the Ministry of Defense with control over the departments administering settlement and open lands under the Coordinator of Government Activities in the Territories, in addition to another post of a deputy minister. The deal also included giving the post of Minister of Aliyah and Integration to Ofir Sofer, the newly created National Missions Ministry to Orit Strook, and the chairmanship of the Knesset's Constitution, Law and Justice Committee to Simcha Rothman. Likud and United Torah Judaism signed a coalition agreement on 6 December, in order to allow a request for an extension of the deadline. Under it, the party would receive the Ministry of Construction and Housing; the chairmanship of the Knesset Finance Committee, which would be given to Moshe Gafni; and the Ministry of Jerusalem and Tradition (which would replace the Ministry of Jerusalem Affairs and Heritage), in addition to several posts of deputy ministers and chairmanships of Knesset committees. Likud also signed a deal with Shas by 8 December, securing interim coalition agreements with all of its allies. Under the deal, Deri would first serve as the Minister of the Interior and Health, before rotating posts with Smotrich after two years. The party would also receive the Ministries of Religious Services and Welfare, as well as posts of deputy ministers in the Ministries of Education and the Interior. The vote to replace the then-incumbent Knesset speaker Mickey Levy was scheduled for 13 December, after Likud and its allies secured the necessary number of signatures for it. Yariv Levin of Likud was elected as an interim speaker by 64 votes, while his opponents Merav Ben-Ari of Yesh Atid and Ayman Odeh of Hadash received 45 and five votes respectively. Netanyahu asked Herzog for a 14-day extension after the agreement with Shas, to finalize the roles his allied parties would play. Herzog on 9 December extended the deadline to 21 December. On that date, Netanyahu informed Herzog that he had succeeded in forming a coalition, with the new government expected to be sworn in by 2 January 2023. The government was sworn in on 29 December 2022. Timeline Israeli law provided that people convicted of crimes could not serve in the government. An amendment to that law, known colloquially as the Deri Law, was made in late 2022 to allow those who had been convicted but not sentenced to prison time to serve. This allowed Deri to be appointed to the cabinet. Shas leader Aryeh Deri was appointed Minister of Health, Minister of the Interior, and Vice Prime Minister in December 2022. He was fired in January 2023, following a Supreme Court decision that his appointment was unreasonable, since he had been convicted of fraud and had promised, as part of a plea deal, not to seek government roles. In March 2023, Defense Minister Yoav Gallant called on the government to delay legislation related to the judicial reform.
Prime Minister Netanyahu announced that Gallant had been dismissed from his position, leading to the continuation of mass protests across the country (which had started in January in Tel Aviv). Gallant continued to serve as a minister, as he had not received formal notice of dismissal, and two weeks later it was announced that Netanyahu had reversed his decision. National Security Minister Itamar Ben-Gvir (the Otzma Yehudit leader) and Minister of Justice Yariv Levin (Likud) both threatened to resign if the judicial reform was delayed.[better source needed] After the outbreak of the Gaza war, five members of the National Unity party joined the government as ministers without portfolio, with leader Benny Gantz being made a member of the new Israeli war cabinet (along with Netanyahu and Gallant). As the war progressed, Minister of National Security Itamar Ben-Gvir threatened to leave the government if the war was ended. A month later, in mid-December, he again threatened to leave if the war was not maintained at "full strength". Gideon Sa'ar stated on 16 March 2024 that his New Hope party would resign from the government and join the opposition if Prime Minister Benjamin Netanyahu did not appoint him to the Israeli war cabinet. Netanyahu did not do so, resulting in Sa'ar's New Hope party leaving the government nine days later, reducing the size of the coalition from 76 MKs to 72. Ben-Gvir and Bezalel Smotrich, of the National Religious Party–Religious Zionism party, indicated that they would withdraw their parties from the government if the January 2025 Gaza war ceasefire was adopted, which would have brought down the government. Ben-Gvir announced on 5 June that the members of his party would be allowed to vote as they wished, though his party resumed its support on 9 June. On 18 May, Gantz had set an 8 June deadline for withdrawal from the coalition, a deadline that was delayed by a day following the 2024 Nuseirat rescue operation. Gantz and his party left the government on 9 June, leaving the government with 64 seats in the Knesset. Sa'ar and his New Hope party rejoined the Netanyahu government on 30 September, increasing the number of seats held by the government to 68. The High Court of Justice ruled on 28 March 2024 that yeshiva funds would no longer be available for students who are "eligible for enlistment", effectively allowing ultra-Orthodox Jews to be drafted into the IDF. Attorney General Gali Baharav-Miara indicated on 31 March that the conscription process had to begin on 1 April. The court ruled on 25 June that the IDF must begin to draft yeshiva students. Likud announced on 7 July that it would not put forward any legislation after Shas and United Torah Judaism said that they would boycott the plenary session over the lack of legislation dealing with the Haredi draft. The ultra-Orthodox boycott continued for a second day, with UTJ briefly ending it on 9 July to vote, unsuccessfully, in favor of a bill that would have weakened the Law of Return. Yuli Edelstein, who was replaced by Boaz Bismuth as chair of the Foreign Affairs and Defense Committee in early August, published a draft version of the conscription law shortly before his ouster. Bismuth cancelled work on the draft law in September 2025, a move Edelstein called "a shame". Bismuth released the official version of the draft law in late November 2025. It weakened penalties for draft evaders, with Edelstein saying it was "the exact opposite" of the bill he had attempted to pass.
Members of Otzma Yehudit resigned from the government on 19 January 2025 over the January 2025 Gaza war ceasefire, with their resignations taking effect on 21 January. The members rejoined in March, following the "resumption" of the war in Gaza. Avi Maoz of the Noam party left the government in March 2025. On 4 June 2025, United Torah Judaism's senior rabbis, Dov Lando and Moshe Hillel Hirsch, instructed the party's MKs to pass a bill that would dissolve the Knesset. Yesh Atid, Yisrael Beytenu and The Democrats announced that they would "submit a bill" for dissolution on 11 June, with Yesh Atid tabling the bill on 4 June. There were also reports that Shas would vote in favor of Knesset dissolution amid divisions within the governing coalition over Haredi conscription. This jeopardized the coalition's majority and would have triggered new elections if the bill passed. The following day, Agudat Yisrael, one of the United Torah Judaism factions, confirmed that it would submit a bill to dissolve the Knesset. Asher Medina, a Shas spokesman, indicated on 9 June that the party would vote in favor of a preliminary bill to dissolve the Knesset. The rabbis of Degel HaTorah instructed the party's MKs on 12 June 2025 to oppose the dissolution of the Knesset, which was followed by Yuli Edelstein and the Shas and Degel HaTorah parties announcing that a deal had been reached, with "rabbinical leaders" telling their parties to delay the dissolution vote by a week. Shas and Degel HaTorah voted against the dissolution bill, which consequently failed its preliminary reading in a vote of 61 against and 53 in favor. MKs Ya'akov Tessler and Moshe Roth of Agudat Yisrael voted in favor of dissolution. Another dissolution bill cannot be brought forward for six months. If the bill had passed its preliminary reading and three further readings, an election would have been held in approximately three months; The Jerusalem Post posited that it would have been held in October. Degel HaTorah announced on 14 July 2025 that it would leave the government because its members were dissatisfied with the draft bill proposed by Yuli Edelstein regarding Haredi exemptions from the Israeli draft. Several hours later, Agudat Yisrael announced that it would also leave the government. Deputy Transportation Minister Uri Maklev; Moshe Gafni, the head of the Knesset Finance Committee; Ya'akov Asher, the head of the Knesset Interior and Environment Protection Committee; and Jerusalem Affairs Minister Meir Porush all submitted their resignations, which took effect after 48 hours. Sports Minister Ya'akov Tessler and Special Committee for Public Petitions chair Yitzhak Pindrus also submitted resignations. Yisrael Eichler submitted his resignation as head of the Knesset Labor and Welfare Committee the same day. The resignations left Netanyahu's government with 60 seats in the Knesset, as Avi Maoz of the Noam party had already left the government in March 2025. Despite Edelstein's ouster in August, a spokesman for UTJ head Yitzhak Goldknopf remarked that it would not change the faction's withdrawal from the government. The religious council for Shas, called the Moetzet Chachmei HaTorah, instructed the party on 16 July to leave the government but stay in the coalition. The following day, various cabinet ministers submitted their resignations, including Interior Minister Moshe Arbel, Social Affairs Minister Ya'akov Margi and Religious Services Minister Michael Malchieli.
Malchieli reportedly postponed his resignation so that he could attend a 20 July meeting of the panel investigating whether Attorney General Gali Baharav-Miara should be dismissed. Deputy Minister of Agriculture Moshe Abutbul, Minister of Health Uriel Buso and Haim Biton, a minister in the Education Ministry, also submitted their resignation letters, while Arbel retracted his. The last cabinet member from the party to submit a resignation was Labor Minister Yoav Ben-Tzur. The ministers who resigned would return to the Knesset, replacing MKs Moshe Roth, Yitzhak Pindrus and Eliyahu Baruchi. Principles and priorities According to the agreements signed between Likud and each of its coalition partners, and the incoming government's published guideline principles, its stated priorities are to combat the cost of living, further centralize Orthodox control over state religious services, pass judicial reforms that include legislation to reduce judicial controls on executive and legislative power, expand settlements in the West Bank, and consider annexation of the West Bank. Before the vote of confidence in his new government in the Knesset, Netanyahu presented three top priorities for the new government: internal security and governance, halting the nuclear program of Iran, and the development of infrastructure, with a focus on further connecting the center of the country with its periphery. Policies The government's flagship program, centered on reforms of the judicial branch, drew widespread criticism. Critics said it would have negative effects on the separation of powers, the office of the Attorney General, the economy, public health, women and minorities, workers' rights, scientific research, the overall strength of Israel's democracy and its foreign relations. After weeks of public protests on Israel's streets, joined by a growing number of military reservists, Minister of Defense Yoav Gallant spoke against the reform on 25 March, calling for a halt to the legislative process "for the sake of Israel's security". The next day, Netanyahu announced that Gallant would be removed from his post, sparking another wave of protests across Israel and ultimately leading Netanyahu to agree to pause the legislation. On 27 March 2023, after the public protests and general strikes, Netanyahu announced a pause in the reform process to allow for dialogue with opposition parties. On 10 April, Netanyahu announced that Gallant would keep his post. However, negotiations aimed at reaching a compromise collapsed in June, and the government resumed its plans to unilaterally pass parts of the legislation. On 24 July 2023, the Knesset passed a bill curbing the power of the Supreme Court to declare government decisions unreasonable; on 1 January 2024, the Supreme Court struck the law down. The Knesset passed a "watered-down" version of the judicial reform package in late March 2025, which "changes the composition" of the judicial selection committee. In December 2022, Minister of National Security Itamar Ben-Gvir sought to amend the law that regulates the operations of the Israel Police, such that the ministry would have more direct control of its forces and policies, including its investigative priorities. Attorney General Gali Baharav-Miara objected to the draft proposal, raising concerns that the law would enable the politicization of police work, and the draft was amended to partially address those concerns.
Nevertheless, in March 2023 Deputy Attorney General Gil Limon stated that the Attorney General's fears had been realized, referring to several instances of ministerial involvement in the day-to-day work of the otherwise independent police force – statements that were repeated by the Attorney General herself two days later. Separately, Police Commissioner Kobi Shabtai instructed Deputy Commissioners to avoid direct communication with the minister, later stating that "the Israel Police will remain apolitical, and act only according to law". Following appeals by the Association for Civil Rights in Israel and the Movement for Quality Government in Israel, the High Court of Justice instructed Ben-Gvir "to refrain from giving operational directions to the police... [especially] as regards to protests and demonstrations against the government." As talk of halting the judicial reform gained momentum during March 2023, Minister of National Security Itamar Ben-Gvir threatened to resign if the legislation implementing the changes was suspended. To appease Ben-Gvir, Prime Minister Netanyahu announced that the government would promote the creation of a new National Guard, to be headed by Ben-Gvir. On 29 March, thousands of Israelis demonstrated in Tel Aviv, Haifa and Jerusalem against this decision. On 1 April, the New York Times quoted Gadeer Nicola, head of the Arab department at the Association for Civil Rights in Israel, as saying "If this thing passes, it will be an imminent danger to the rights of Arab citizens in this country. This will create two separate systems of applying the law. The regular police which will operate against Jewish citizens — and a militarized militia to deal only with Arab citizens." The same day, while speaking on Israel's Channel 13 about those whom he would like to see enlist in the National Guard, Ben-Gvir specifically mentioned La Familia, the far-right fan club of the Beitar Jerusalem soccer team. On 2 April, Israel's cabinet approved the establishment of a law enforcement body that would operate independently of the police, under Ben-Gvir's authority. According to the decision, the Minister was to establish a committee chaired by the Director General of the Ministry of National Security, with representatives of the ministries of defense, justice and finance, as well as the police and the IDF, to outline the operations of the new organization. The committee's recommendations would be submitted to the government for consideration. Addressing a conference on 4 April, Police Commissioner Kobi Shabtai said that he was not opposed to the establishment of a security body which would answer to the police, but "a separate body? Absolutely not." The police chief said he had warned Ben-Gvir that the establishment of a security body separate from the police is "unnecessary, with extremely high costs that may harm citizens' personal security." During a press conference on 10 April, Prime Minister Netanyahu said, in what was seen by some news outlets as a concession to the protesters, that "This will not be anyone's militia, it will be a security body, orderly, professional, that will be subordinate to one of the [existing] security bodies." The committee established by the government recommended that the government order the immediate establishment of the National Guard and allocate budgets for it. The National Guard, which would be commanded by a police superintendent, would not be subordinate to Ben-Gvir.
It would instead be subordinate to the police commissioner and be part of the Israel Border Police. The Ministries of Defense and Finance opposed the conclusions. The Israeli National Security Council called for further discussion of the matter. The coalition's efforts to expand the purview of Rabbinical courts; force some organizations, such as hospitals, to enforce certain religious practices; amend the Law Prohibiting Discrimination to allow gender segregation and discrimination on the grounds of religious belief; expand funding for religious causes; and put into law the exemption of yeshiva and kolel students from conscription have drawn criticism. According to a Haaretz op-ed of 7 March 2023, "the current coalition is interested... in modifying the public space so it suits the religious lifestyle. The legal coup is meant to castrate anyone who can prevent it, most of all the HCJ." Several banks and institutional investors, including the Israel Discount Bank and AIG, have committed to avoid investing in, or providing credit to, any organization that discriminates against others on grounds of religion, race, gender or sexual orientation. A series of technology companies and investment firms, including Wiz, Intel Israel, Salesforce and Microsoft Israel Research and Development, have criticized the proposed changes to the Law Prohibiting Discrimination, with Wiz stating that it would require its suppliers to commit to preventing discrimination. Over sixty prominent law firms pledged that they would neither represent, nor do business with, discriminating individuals and organizations. Insight Partners, a major private equity fund operating in Israel, released a statement warning against intolerance and any attempt to harm personal liberties. Orit Lahav, chief executive of the women's rights organization Mavoi Satum ("Dead End"), said that "the Rabbinical courts are the most discriminatory institution in the State of Israel... Limiting the HCJ while expanding the jurisdiction of the Rabbinical courts would... cause significant harm to women." Anat Thon Ashkenazy, Director of the Center for Democratic Values and Institutions at the Israel Democracy Institute, said that "almost every part of the reform could harm women... the meaning of an override clause is that even if the court says that the law on gender segregation is illegitimate, is harmful, the Knesset could say 'Okay, we say otherwise'". She added that "there is a very broad institutional framework here, after which there will come legislation that harms women's rights and we will have no way of protecting or stopping it." During July 2023, 20 professional medical associations signed a position letter warning against the ramifications for public health that would result from the exclusion of women from the public sphere. They cited, among other concerns, a rise in the prevalence of risk factors for cardiovascular disease, pregnancy-related ailments, psychological distress, and the risk of suicide. On 30 July, the Knesset passed an amendment to the penal law adding sexual offenses to those whose penalty can be doubled if committed on grounds of "nationalistic terrorism, racism or hostility towards a certain community". According to MK Limor Son Har-Melech, the bill is meant to penalize any individual who "[intends to] harm a woman sexually based on her Jewishness".
The law was criticized by MK Gilad Kariv as "populist, nationalistic, and dangerous towards the Arab citizens of Israel", and by MK Ahmad Tibi as a "race law", and was objected to by legal advisors at the Ministry of Justice and the Knesset Committee on National Security. Activist Orit Kamir wrote that "the amendment... is neither feminist, equal, nor progressive, but the opposite: it subordinates women's sexuality to the nationalistic, racist patriarchy. It hijacks the Law for Prevention of Sexual Harassment to serve a world view that tags women as sexual objects that personify the nation's honor." Yael Sherer, director of the Lobby to Combat Sexual Violence, criticized the law as being informed by dated ideas about sexual assault, and proposed that MKs "dedicate a session... to give victims of sexual assault an opportunity to come out of the darkness... instead of [submitting] declarative bills that change nothing and are not meant but for grabbing headlines". In Israel during 2022, 24 women "were murdered because they were women", a 50% increase compared to 2021. A law permitting courts to order men subject to a restraining order following domestic violence offenses to wear electronic tags was drafted during the previous Knesset and had passed its first reading unanimously. On 22 March 2023, the Knesset voted to reject the bill. It had been urged to do so by National Security Minister Itamar Ben-Gvir, who said that the bill was unfair to men. Earlier in the week, Ben-Gvir had blocked the measure from advancing in the ministerial legislative committee. The MKs voting against the bill included Prime Minister Netanyahu. The Association of Families of Murder Victims said that by rejecting the law, National Security Minister Itamar Ben-Gvir "brings joy to violent men and abandons the women threatened with murder… unsupervised restraining orders endanger women's lives even more. They give women the illusion of being protected, and then they are murdered." MK Pnina Tamano-Shata, chairwoman of the Knesset Committee on the Status of Women and Gender Equality, said that "the coalition proved today that it despises women's lives." The NGO Amutat Bat Melech, which assists Orthodox and ultra-Orthodox women suffering from domestic violence, said: "Rejecting the electronic bracelet bill is disconnected from the terrible reality of seven femicides since the beginning of the year. This is an effective tool of the first degree that could have saved lives and reduced the threat to women suffering from domestic violence. This is a matter of life and death, whose whole purpose is to provide a solution to defend women." The agreement signed by the coalition parties includes the setting up of a committee to draft changes to the Law of Return. Israeli religious parties have long demanded that the "grandchild clause" of the Law of Return be cancelled. This clause grants citizenship to anyone with at least one Jewish grandparent, as long as they do not practice another religion. If the grandchild clause were removed from the Law of Return, around 3 million people who are currently eligible for aliyah would no longer be eligible.
The heads of the Jewish Agency, the Jewish Federations of North America, the World Zionist Organization and Keren Hayesod sent a joint letter to Prime Minister Netanyahu, expressing their "deep concern" about any changes to the Law of Return, adding that "Any change in the delicate and sensitive status quo on issues such as the Law of Return or conversion could threaten to unravel the ties between us and keep us away from each other." The Executive Council of Australian Jewry and the Zionist Federation of Australia issued a joint statement saying "We… view with deep concern… proposals in relation to religious pluralism and the law of return that risk damaging Israel's… relationship with Diaspora Jewry." On 19 March 2023, Israeli Finance Minister Bezalel Smotrich spoke in Paris at a memorial service for a Likud activist. The lectern at which Smotrich spoke was covered with a flag depicting the "Greater Land of Israel", encompassing the whole of Mandatory Palestine as well as Trans-Jordan. During his speech, Smotrich said that "there's no such thing as Palestinians because there's no such thing as a Palestinian people." He added that the Palestinian people were a fictitious nation invented only to fight the Zionist movement, asking "Is there a Palestinian history or culture? There isn't any." The event received widespread media coverage. On 21 March, a spokesman for the US State Department sharply criticized Smotrich's comments. "The comments, which were delivered at a podium adorned with an inaccurate and provocative map, are offensive, they are deeply concerning, and, candidly, they're dangerous. The Palestinians have a rich history and culture, and the United States greatly values our partnership with the Palestinian people," he said. The Jordanian Foreign Ministry also voiced disapproval: "The Israeli Minister of Finance's use, during his participation in an event held yesterday in Paris, of a map of Israel that includes the borders of the Hashemite Kingdom of Jordan and the occupied Palestinian territories represents a reckless inflammatory act, and a violation of international norms and the Jordanian-Israeli peace treaty." Additionally, a map encompassing Mandatory Palestine and Trans-Jordan with a Jordanian flag on it was placed on a central lectern in the Jordanian Parliament. Jordan's parliament voted to expel the Israeli ambassador. Israel's Ministry of Foreign Affairs released a clarification relating to the matter, stating that "Israel is committed to the 1994 peace agreement with Jordan. There has been no change in the position of the State of Israel, which recognizes the territorial integrity of the Hashemite Kingdom of Jordan". Ahead of a Europe Day event due to take place on 9 May 2023, far-right National Security Minister Itamar Ben-Gvir was assigned as the government's representative and a speaker at the event by the government secretariat, which assigns ministers to receptions marking the national days of foreign embassies. The European Union requested that Ben-Gvir not attend, but the government did not change the plan. On 8 May, the European delegation to Israel cancelled the reception, stating: "The EU Delegation to Israel is looking forward to celebrating Europe Day on May 9, as it does every year. Regrettably, this year we have decided to cancel the diplomatic reception, as we do not want to offer a platform to someone whose views contradict the values the European Union stands for.
However, the Europe Day cultural event for the Israeli public will be maintained to celebrate with our friends and partners in Israel the strong and constructive bilateral relationship". Israel's Opposition Leader Yair Lapid stated: "Sending Itamar Ben-Gvir to a gathering of EU ambassadors is a serious professional mistake. The government is embarrassing a large group of friendly countries, jeopardizing future votes in international institutions, and damaging our foreign relations. Last year, after a decade of efforts, we succeeded in signing an economic-political agreement with the European Union that will contribute to the Israeli economy and our foreign relations. Why risk it, and for what? Ben-Gvir is not a legitimate person in the international community (and not really in Israel either), and sometimes you have to be both wise and just and simply send someone else". On 23 February 2023, Defense Minister Gallant signed an agreement assigning governmental powers in the West Bank to a body to be headed by Minister Bezalel Smotrich, who would effectively become the governor of the West Bank, controlling almost all areas of life there, including planning, building and infrastructure. Israeli governments had hitherto been careful to keep the occupation under a military government. The temporary holding of power by an occupying military force, pending a negotiated settlement, is a principle of international law – an expression of the prohibition against obtaining sovereignty through conquest that was introduced in the wake of World War II. An editorial in Haaretz noted that the assignment of governmental powers in the West Bank to a civilian governor, alongside the plan to expand the dual justice system so that Israeli law would apply fully to settlers in the West Bank, constituted de jure annexation of the West Bank. On 26 February 2023, following the 2023 Huwara shooting in which two Israelis were killed by an unidentified attacker, hundreds of Israeli settlers attacked the Palestinian town of Huwara and three nearby villages, setting alight hundreds of Palestinian homes (some with people in them), businesses, a school, and numerous vehicles, killing one Palestinian man and injuring 100 others. Bezalel Smotrich subsequently called on Twitter for Huwara to be "wiped out" by the Israeli government. MK Zvika Fogel, of the ultra-nationalist Otzma Yehudit, which forms part of the governing coalition, said that he "looks very favorably upon" the results of the rampage. Members of the coalition proposed an amendment to the Disengagement Law, which would allow Israelis to return to settlements vacated during the 2005 Israeli disengagement from Gaza and the northern West Bank. Most countries considered the evacuated settlements illegal under international law. The proposal was approved for voting by the Foreign Affairs and Defense Committee on 9 March 2023, while the committee was still waiting for briefing materials from the NSS, IDF, MFA and Shin Bet, and was passed on 21 March. The US requested clarification from Israeli ambassador Michael Herzog. A US State Department spokesman stated that "The U.S.
strongly urges Israel to refrain from allowing the return of settlers to the area covered by the legislation, consistent with both former Prime Minister Sharon and the current Israeli Government's commitment to the United States," noting that the actions represent a clear violation of undertakings given by the Sharon government to the Bush administration in 2005, and by Netanyahu's far-right coalition to the Biden administration the previous week. Minister of Communications Shlomo Karhi had initially intended to cut the funding of the Israeli Public Broadcasting Corporation (also known by its blanket branding Kan) by 400 million shekels – roughly half of its total budget – close several departments, and privatize content creation. In response, the Director-General of the European Broadcasting Union, Noel Curran, sent two urgent letters to Netanyahu, expressing his concerns and calling on the Israeli government to "safeguard the independence of our Member KAN and ensure it is allowed to operate in a sustainable way, with funding that is both stable, adequate, fair, and transparent." On 25 January 2023, nine journalist organizations representing some of Kan's competitors issued a statement of concern, acknowledging the "important contribution of public broadcasting in creating a worthy, unbiased and non-prejudicial journalistic platform", and noting that "the existence of the [broadcasting] corporation as a substantial public broadcast organization strengthens media as a whole, adding to the competition in the market rather than weakening it." They also expressed their concern that the "real reason" for the proposal was actually "an attempt to silence voices from which... [the Minister] doesn't always draw satisfaction". The same day, hundreds of journalists, actors and filmmakers protested in Tel Aviv. The proposal was eventually put on hold. On 22 February 2023 it was reported that Prime Minister Netanyahu was attempting to appoint his close associate Yossi Shelley as the deputy to the National Statistician — a highly sensitive position in charge of providing accurate data for decision makers. The appointment of Shelley, who did not possess the required qualifications for the role, was withdrawn after it was made public. In its daily editorial, Haaretz tied this attempt to the judicial reform: "once they take control of the judiciary, law enforcement and public media, they wish to control the state's database, the dry numerical data it uses to plan its future". Netanyahu also proposed Avi Simhon for the role, and eventually froze all appointments at the Israel Central Bureau of Statistics. Also on 22 February 2023, it was revealed that Yoav Kish, the Minister of Education, was promoting a draft government decision to change the National Library of Israel's board of directors, which would grant him more power over the institution. In response, the Hebrew University – which owned the library until 2008 – announced that if the draft were accepted, it would withdraw its collections from the library. The university's collections, which according to the university constitute some 80% of the library's collection, include the Agnon archive, the original manuscript of Hatikvah, and the Rothschild Haggadah, the oldest known Haggadah.
A group of 300 authors and poets signed an open letter against the move, further noting their objection to any "political takeover" of public broadcasting, as well as to "any legislation that will castrate the judiciary and damage the democratic foundations of the state of Israel". Several days later, it was reported that a series of donors had decided to withhold their donations to the library, totaling some 80 million shekels. On 3 March, a petition against the move by 1,500 academics, including Israel Prize laureates, was sent to Kish. The proposal has been seen by some as retribution against Shai Nitzan, the former State Attorney and the library's current rector. On 5 March it was reported that the Legal Advisor to the Ministry of Finance, Asi Messing, was withholding the proposal. According to Messing, the proposal – which was being promoted as part of the Economic Arrangements Law – "was not reviewed... by the qualified personnel in the Ministry of Finance, does not align with any of the common goals of the economic plan, was not agreed to by myself and was not approved by the Attorney General." As of February 2023, the government had been debating several proposals that would significantly weaken the Ministry of Environmental Protection, including reducing the environmental regulation of planning and development, and of electricity production. One of the main proposals, the transfer of a 3 billion shekel fund meant to finance waste management plants from the Ministry of Environmental Protection to the Ministry of the Interior, was eventually withdrawn. The Minister of Environmental Protection, Idit Silman, has been criticized for meeting with climate change denialists, for wasteful and personally motivated travel at the ministry's expense, for politicizing the role, and for engaging in political activity on the ministry's time. The government has been noted for an unusually high number of dismissals and resignations of senior career civil servants, and for frequent attempts to replace them with candidates with known political associations, who are often less competent. According to sources, Netanyahu and people in his vicinity are seeking out civil servants who were appointed by the previous government, intent on replacing them with people loyal to him. Governmental nominees for various positions have been criticized for lack of expertise. In addition to the nominee to the position of Deputy National Statistician (see above), the Director General of the Ministry of Finance, Shlomi Heisler; the Director General of the Ministry of Justice, Itamar Donenfeld; and the Director General of the Ministry of Transport, Moshe Ben Zaken, have all been criticized for incompetence, lack of familiarity with their Ministries' subject matter, lack of interest in the job, or lack of experience in managing large organizations. It has been reported that in some ministries, senior officials were enacting slowdowns as a means of dealing with the new ministers and directors general. On 28 July, the director general of the Ministry of Education, Asaf Zalel, a retired Air Force brigadier general who had been appointed only in January, resigned, citing the societal "rift" as his reason. When asked about attempts to appoint his personal friend and attorney to the board of directors of a state-owned company, Minister David Amsalem replied: "that is my job, due to my authority to appoint directors. I put forward people that I know and hold in esteem".
Under Minister of Transport Miri Regev, the ministry has dismissed or lost the heads of the National Public Transport Authority, the Israel Airports Authority, the National Road Safety Authority and Israel Railways, as well as several officials in Netivei Israel. The current chair of Netivei Israel is Likud member and Regev associate Yigal Amadi, and the legal counsel is Einav Abuhzira, daughter of a former Likud branch chair. Abuhzira was appointed instead of Elad Berdugo, nephew of Netanyahu surrogate Yaakov Bardugo, after Berdugo was disqualified for the role by the Israel Government Companies Authority. In July 2023, the Minister of Communications, Shlomo Karhi, and the minister in charge of the Israel Government Companies Authority, Dudi Amsalem, deposed the chair of the Israel Postal Company, Michael Vaknin. The chair, who had been hired to lead the company's financial recovery after years of operational loss and towards privatization, had gained the support of officials at the Authority and at the Ministry of Finance; nevertheless, the ministers claimed that his performance was inadequate, and nominated in his place Yiftah Ron-Tal, who has known ties to Netanyahu and Smotrich. They also nominated four new directors, two of whom have known political associations, and a third of whom was a witness in Netanyahu's trial. The coalition is allowed to spend a portion of the state's budget on a discretionary basis, meant to coax member parties into reaching an agreement on the budget. As of May 2023, the government was pushing an allocation of over 13 billion shekels over two years – almost seven times the amount allocated by the previous government. Most of the funds were to be allocated for uses associated with the religious, Orthodox and settler communities. The head of the Budget Department at the Ministry of Finance, Yoav Gardos, objected to the allocations, claiming they would exacerbate unemployment in the Orthodox community, which is projected to cost the economy a total of 6.7 trillion shekels in lost output by 2065. At the onset of the Gaza war and the declaration of a state of national emergency, Minister of Finance Bezalel Smotrich instructed government agencies to continue with the planned distribution of discretionary funds. Corruption During March 2023, the government was promoting an amendment to the Law on Public Service (Gifts) that would allow Netanyahu to receive donations to fund his legal defense. The amendment followed a decision by the High Court of Justice (HCJ) that forced Netanyahu to refund US$270,000 given to him and his wife by his late cousin, Nathan Mileikowsky, for their legal defense. This was in contrast to past statements by Minister of Justice Yariv Levin, who had spoken against the possible conflicts of interest that can result from such transactions. The bill was opposed by Attorney General Gali Baharav-Miara, who stressed that it could "create a real opportunity for governmental corruption", and was eventually withdrawn at the end of March. As of March 2023, the coalition was promoting a bill that would prevent judicial review of ministerial appointments. The bill was intended to prevent the HCJ from reviewing the appointment of the twice-convicted chairman of Shas, Aryeh Deri (convicted of bribery, fraud, and breach of trust), to a ministerial position, after his previous appointment was annulled on grounds of unreasonableness.
The bill followed on the heels of another amendment, which relaxed the ban on the appointment of convicted criminals so that Deri – who was handed a suspended sentence after his second conviction – could be appointed. The bill was opposed by the Attorney General, as well as by the Knesset Legal Adviser, Sagit Afik. Israeli law allows for declaring a Prime Minister (as well as several other high-ranking public officials) to be temporarily or permanently incapacitated, but does not specify the conditions which can lead to a declaration of incapacitation. In the case of the Prime Minister, the authority to do so is given to the Attorney General. In March 2023, the coalition advanced a bill that would pass this authority from the Attorney General to the government, subject to the approval of a Knesset committee, and clarify that incapacitation could only result from medical or mental conditions. On 3 January 2024, the Supreme Court ruled by a majority of 6 out of 11 that the law would only take effect from the next Knesset, because in its immediate application it was a personal law intended to serve a distinct personal purpose. Later, the court rejected a petition to declare Netanyahu incapacitated due to his ongoing trial and conflicts of interest.