4624116
https://en.wikipedia.org/wiki/Fanfin
Fanfin
Fanfins or hairy anglerfish are a family, Caulophrynidae, of marine ray-finned fishes within the order Lophiiformes, the anglerfishes. The fishes in this family are found almost around the world in the deeper, aphotic waters of the oceans. Taxonomy The fanfin family, Caulophrynidae, was first proposed in 1896 by the American ichthyologists George Brown Goode and Tarleton Hoffman Bean. The 5th edition of Fishes of the World classifies the Caulophrynidae within the suborder Ceratioidei of the order Lophiiformes. This family was thought to be basal within the suborder Ceratioidei, but phylogenetic analyses have recovered it as the sister taxon to the clade containing Gigantactinidae, Neoceratiidae, and Linophrynidae, and thus deeply embedded within the suborder. However, molecular studies show that the familial relationships within the Ceratioidei are still to be fully resolved, and Caulophrynidae may yet prove to be the most basal taxon in the suborder. Etymology The fanfin family, Caulophrynidae, takes its name from the genus Caulophryne. This name is a combination of caulis, which means "stem", an allusion to the stem-like base of the illicium, with phryne, meaning "toad", a suffix commonly used in the names of anglerfish genera. The use of this suffix may date as far back as Aristotle and Cicero, who referred to anglerfishes as "fishing-frogs" and "sea-frogs", respectively, possibly because of their resemblance to frogs and toads. Genera The fanfin family, Caulophrynidae, contains the following two genera: Caulophryne and Robia. Characteristics Fanfins have a high degree of sexual dimorphism. The females have short, round bodies with large mouths. The lower jaw reaches back past the base of the pectoral fin. The teeth in the jaws are thin, backwards-curving and depressible. They have highly elongated dorsal and anal fins, with the soft rays of these fins resembling long threads. There are 8 fin rays in the caudal fin. They do not have pelvic fins. The sensory cells of the lateral line system are at the tips of the filamentous rays of the dorsal and anal fins. They have a simple esca, or lure, which lacks a bulb but which may have filaments or appendages. The skin is naked and they do not have any dermal spines. The males are much smaller than the females and have more elongated bodies. They have large eyes and large nostrils, with large olfactory receptors. They have no teeth in the jaws, although there are tooth-like structures on the jaw bones which are used to attach to the larger female. The males do not have elongated dorsal and anal fins but do have large pectoral fins. The juveniles are the only ceratioid anglerfishes to have pelvic fins. The largest species of fanfin is Caulophryne polynema with a maximum published total length of . Distribution and habitat Fanfins are found in the Atlantic, Indian and Pacific Oceans, where they live in the bathypelagic zone. Biology Fanfins are predators on other fishes. They reproduce by means of pelagic eggs which hatch into pelagic larvae. The short, rounded larvae have swollen skin and well-developed pectoral and pelvic fins, the pelvic fins being lost as they metamorphose. Both larval males and females have a basic illicium. Metamorphosis starts at a standard length of . The large, well-developed eyes and olfactory apparatus of the metamorphosed males are used to detect and home in on a species-specific chemical released by the female to attract males. 
When the male finds a female he bites onto her, and the tissues and circulatory systems of the pair fuse; he becomes a sexual parasite on the female, nourished through their shared blood. For the remainder of his life he remains attached to the female and fertilises her eggs.
Biology and health sciences
Acanthomorpha
Animals
4627070
https://en.wikipedia.org/wiki/Frogfish
Frogfish
Frogfishes are any member of the anglerfish family Antennariidae, of the order Lophiiformes. Antennariids are known as anglerfish in Australia, where the term "frogfish" refers to members of the unrelated family Batrachoididae. Frogfishes are found in almost all tropical and subtropical oceans and seas around the world, the primary exception being the Mediterranean Sea. Frogfishes are small, short and stocky, and sometimes covered in spinules and other appendages to aid in camouflage. The camouflage aids in protection from predators and enables them to lure prey. Many species can change colour; some are covered with other organisms such as algae or hydrozoa. In keeping with this camouflage, frogfishes typically move slowly, lying in wait for prey, and then striking extremely rapidly, in as little as 6 milliseconds. Few traces of frogfishes remain in the fossil record, though Antennarius monodi is known from the Miocene of Algeria and Eophryne barbuttii is known from the Eocene of Italy. Taxonomy The frogfish family, Antennariidae, was first proposed as a family in 1822 by the Polish zoologist Feliks Paweł Jarocki. The 5th edition of Fishes of the World recognises 13 genera within the family but no subfamilies. Other authorities recognise two subfamilies, the Antennariinae and the Histiophryninae, while others treat these as two separate families. The Antennariidae is classified within the suborder Antennarioidei within the order Lophiiformes, the anglerfishes. The Antennariidae is regarded, with its sister taxon, the Tetrabrachiidae, as the most derived clade within the suborder Antennarioidei. Etymology The frogfish family, Antennariidae, takes its name from its type genus, Antennarius. Antennarius suffixes -ius to antenna, an allusion to the first dorsal spine being adapted into a tentacle on the snout used as a lure to attract prey. Genera The frogfish family, Antennariidae, is divided into the following genera: The 5th edition of Fishes of the World classifies another seven genera within the Antennariidae: However, Catalog of Fishes and FishBase classify these genera in the separate family Histiophrynidae, which other authorities treat as a subfamily of Antennariidae, the Histiophryninae. The monospecific genus Tathicarpus is the most derived member of this grouping and represents a separate lineage from all other frogfishes, leading to some consideration of it being placed in its own family, the Tathicarpidae. Range Frogfishes live in the tropical and subtropical regions of the Atlantic and Pacific, as well as in the Indian Ocean and the Red Sea. Their habitat lies for the most part between the 20 °C isotherms, in areas where the surface-level water usually has a temperature of or more. They extend beyond the 20 °C isotherms in the area of the Azores, Madeira and the Canary Islands, along the Atlantic coast of the United States, on the south coast of Australia and the northern tip of New Zealand, coastal Japan, around Durban, South Africa, and at Baja California, Mexico. The greatest diversity of species is in the Indo-Pacific region, with the highest concentration around Indonesia. In the small Lembeh Strait, north-east of Sulawesi, divers have found 9 different species. Frogfishes generally live on the ocean floor around coral or rock reefs, at most to deep. A few exceptions to these general limits are known. The brackishwater frogfish is at home in ocean waters as well as brackish and fresh water around river mouths. 
The sargassum fish lives in clumps of drifting sargassum, which often floats into the deeper ocean and has been known to carry the sargassum fish as far north as Norway. Features Frogfishes have a stocky and unusual appearance for a fish. Ranging from long, their plump, high-backed, unstreamlined body is scaleless and bare, often covered with bumpy, bifurcated spinules. Their short bodies have between 18 and 23 vertebrae and their mouths are upward-pointed with palatal teeth. They are often brightly coloured, white, yellow, red, green, or black, or spotted in several colours to blend in with their coral surroundings. Coloration can also vary within one species, making it difficult to differentiate between them. Rather than typical dorsal fins, the front-most of the three fins is called the illicium or "rod" and is topped with the esca or "lure". The illicium often has striped markings, while the esca takes a different form in each species. Because of the variety of colours even within a single species, the esca and illicium are useful tools to differentiate among different varieties. Some escas resemble fish, some shrimp, some polychaetes, some tubeworms, and some simply a formless lump; one genus, Echinophryne, has no esca at all. Despite very specific mimicry in the esca, examinations of stomach contents do not reveal any specialized predation patterns; for example, frogfishes with worm-mimicking escas do not consume only worm-eating fish. If lost, the esca can be regenerated. In many species, the illicium and esca can be withdrawn into a depression between the second and third dorsal fins for protection when they are not needed. Frogfish have small, round gill openings behind their pectoral fins. With the exception of Butler's frogfish and the rough anglerfish, frogfish use a gas bladder to control their buoyancy. Mimicry and camouflage The unusual appearance of the frogfish functions to conceal it from predators and sometimes to mimic a potential meal to lure in its own prey. In the study of animal behavior, this is known as aggressive mimicry. Their unusual shape, colour, and skin textures disguise frogfish. Some resemble stones or coral, while others imitate sponges or sea squirts with dark splotches instead of holes. In 2005, a species was discovered, the striated frogfish, that mimics a sea urchin, while the sargassumfish is coloured to blend in with the surrounding sargassum. Some frogfish are covered with algae or hydrozoa. Their camouflage can be so perfect that sea slugs have been known to crawl over the fish without recognizing them. For the scaleless and unprotected frogfish, camouflage is an important defense against predators. Some species can also inflate themselves, like pufferfish, by sucking in water in a threat display. Frogfish flushed from their hiding spots and clearly visible have been observed, in aquaria and in nature, to be attacked by clownfish, damselfish, and wrasses, and in aquaria even to be killed. Many frogfishes can change their colour. The light colours are generally yellows or yellow-browns, while the darker are green, black, or dark red. They usually appear with the lighter colour, but a change can last from a few days to several weeks. What triggers the change is unknown. Movement Frogfishes generally do not move very much, preferring to lie on the sea floor and wait for prey to approach. Once prey is spotted, they can approach slowly using their pectoral and pelvic fins to walk along the floor. 
They rarely swim, preferring to clamber over the sea bottom with their fins in one of two "gaits". In the first, they alternately move their pectoral fins forward, propelling themselves somewhat like a two-legged tetrapod, leaving the pelvic fins out of use. Alternatively, they can move in something like a slow gallop, whereby they move their pectoral fins simultaneously forward and back, transferring their weight to the pelvic fins while moving the pectorals forward. With either gait, they can cover only short stretches. In open water, frogfishes can swim with strokes of the caudal fin. They also use jet propulsion, which is more often employed by younger frogfish. It is achieved by rhythmically gulping water and forcing it out through their gill openings, also called opercular openings, which lie behind their pectoral fins. The sargassum frogfish has adapted fins which can grab strands of sargassum, enabling it to "climb" through the seaweed. Hunting Frogfishes eat crustaceans, other fish, and even each other. When potential prey is first spotted, the frogfish follows it with its eyes. Then, when it approaches within roughly seven body-lengths, the frogfish begins to move its illicium in such a way that the esca mimics the motions of the animal it resembles. As the prey approaches, the frogfish slowly moves to prepare for its attack; sometimes this involves approaching the prey or "stalking", while sometimes it is simply adjusting its mouth angle. The catch itself is made by the sudden opening of the jaws, which enlarges the volume of the mouth cavity up to 12-fold, pulling the prey into the mouth along with water. The attack can be as fast as 6 milliseconds. The water flows out through the gills, while the prey is swallowed and the esophagus closed with a special muscle to keep the victim from escaping. In addition to expanding their mouths, frogfish can also expand their stomachs to swallow animals up to twice their size. Slow-motion filming has shown that the frogfish sucks in its prey in just six milliseconds, so fast that other animals cannot see it happen. Reproduction The reproductive behavior of the normally solitary frogfish is still not fully researched. Few observations have been made in aquaria, and even fewer in the wild. Most species are free-spawning, with females laying the eggs in the water and males coming in behind to fertilize them. From eight hours to several days before the egg-laying, the abdomen of the female starts to swell as up to 180,000 eggs absorb water. The male begins to approach the female around two days before the spawning. Whether the spawning is predetermined by some external factor, such as the phase of the moon, or whether the male is attracted by a smell or signal released by the female, is unknown. In all breeding pairs observed so far, one partner was noticeably larger than the other, sometimes by as much as 10 times. When the sex could be determined, the larger partner was always the female. During the free-spawning courtship ritual, the male swims beside and somewhat behind the female, nudges her with his mouth, then remains near her cloaca. Just before the spawning, the female begins to swim above the ocean floor toward the surface. At the highest point of their swim, they release the eggs and sperm before descending. Sometimes, the male pulls the eggs out of the female with his mouth. After mating, the partners depart quickly, as otherwise the smaller male would likely be eaten. 
A few species are substrate-spawners, notably the genera Lophiocharon, Phyllophryne, and Rhycherus, which lay their eggs on a solid surface, such as a plant or rock. Some species guard their eggs, a duty that falls to the male in almost all such species, while most others do not guard their eggs at all. Several species practice brood carrying, for example the three-spot frogfish, whose eggs are attached to the male, and those in the genus Histiophryne, whose brood are carried in the pectoral fins. The eggs are in diameter and cohere in a gelatinous mass or long ribbon, which in sargassumfish are up to a metre (3.3 ft) long and wide. These egg masses can include up to 180,000 eggs. For most species, the eggs drift on the surface. After two to five days, the fish hatch and the newly hatched alevin are between long. For the first few days, they live on the yolk sac while their digestive systems continue to develop. The young have long fin filaments and can resemble tiny, tentacled jellyfish. For one to two months, they live planktonically. After this stage, at a length between , they take on the form of adult frogfish and begin their lives on the sea floor. Young frogfish often mimic the coloration of poisonous sea slugs or flatworms. Fossil record Very few fossil remains of frogfishes have been found. In the northern Italian formation at Monte Bolca, formed from the sedimentation of the Tethys Ocean in the middle Eocene (45 million years ago), a 3-cm (1.2 in) fossil named Histionotophorus bassani was initially described as a frogfish, but was later thought to belong to the closely related extant genus Brachionichthys, the handfishes. In 2005, a fossil from the Miocene of Algeria (3 to 23 million years ago), Antennarius monodi, was described; it is the first proven fossil frogfish, believed to be most closely related to the extant Senegalese frogfish. In 2009, a new fossil from the upper Ypresian Stage of the early Eocene found at Monte Bolca, Italy, was described as a new species, Eophryne barbuttii; it is the oldest known member of the family.
Biology and health sciences
Acanthomorpha
Animals
7977203
https://en.wikipedia.org/wiki/Engineering%20economics
Engineering economics
Engineering economics, previously known as engineering economy, is a subset of economics concerned with the use and "...application of economic principles" in the analysis of engineering decisions. As a discipline, it is focused on the branch of economics known as microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision-making process, its context and environment. It is pragmatic by nature, integrating economic theory with engineering practice. But it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources. As a discipline, though, it is closely related to others such as statistics, mathematics and cost accounting. It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics. Engineers seek solutions to problems, and along with the technical aspects, the economic viability of each potential solution is normally considered from a specific viewpoint that reflects its economic utility to a constituency. Fundamentally, engineering economics involves formulating, estimating, and evaluating the economic outcomes when alternatives to accomplish a defined purpose are available. In some U.S. undergraduate civil engineering curricula, engineering economics is a required course. It is a topic on the Fundamentals of Engineering examination, and questions might also be asked on the Principles and Practice of Engineering examination; both are part of the Professional Engineering registration process. Considering the time value of money is central to most engineering economic analyses. Cash flows are discounted using an interest rate, except in the most basic economic studies. For each problem, there are usually many possible alternatives. One option that must be considered in each analysis, and is often the choice, is the "do nothing" alternative. The opportunity cost of making one choice over another must also be considered. There are also non-economic factors to be considered, like color, style, and public image; such factors are termed attributes. Costs as well as revenues are considered, for each alternative, for an analysis period that is either a fixed number of years or the estimated life of the project. The salvage value is often forgotten, but is important; it is the net cost or revenue for decommissioning the project. Some other topics that may be addressed in engineering economics are inflation, uncertainty, replacements, depreciation, resource depletion, taxes, tax credits, accounting, cost estimation, and capital financing. All these topics are primary skills and knowledge areas in the field of cost engineering. Since engineering is an important part of the manufacturing sector of the economy, engineering industrial economics is an important part of industrial or business economics. Major topics in engineering industrial economics are: the economics of the management, operation, and growth and profitability of engineering firms; macro-level engineering economic trends and issues; engineering product markets and demand influences; and the development, marketing, and financing of new engineering technologies and products. Benefit–cost ratio Examples of usage Some examples of engineering economic problems range from value analysis to economic studies. 
Each of these is relevant in different situations, and most often used by engineers or project managers. For example, engineering economic analysis helps a company not only determine the difference between fixed and incremental costs of certain operations, but also calculate those costs, depending upon a number of variables. Further uses of engineering economics include: Value analysis Linear programming Critical path economy Interest and money-time relationships Depreciation and valuation Capital budgeting Risk, uncertainty, and sensitivity analysis Fixed, incremental, and sunk costs Replacement studies Minimum cost formulas Various economic studies in relation to both public and private ventures Each of the previous components of engineering economics is critical at certain junctures, depending on the situation, scale, and objective of the project at hand. Critical path economy, as an example, is necessary in most situations as it is the coordination and planning of material, labor, and capital movements in a specific project. The most critical of these "paths" are determined to be those that have an effect upon the outcome both in time and cost. Therefore, the critical paths must be determined and closely monitored by engineers and managers alike. Engineering economics helps provide the Gantt charts and activity-event networks to ascertain the correct use of time and resources. Value Analysis Proper value analysis finds its roots in the need for industrial engineers and managers to not only simplify and improve processes and systems, but also logically simplify the designs of those products and systems. Though not directly related to engineering economy, value analysis is nonetheless important, and allows engineers to properly manage new and existing systems/processes to make them simpler and save money and time. Further, value analysis helps combat common "roadblock excuses" that may trip up managers or engineers. Sayings such as "The customer wants it this way" are retorted by questions such as: has the customer been told of cheaper alternatives or methods? "If the product is changed, machines will be idle for lack of work" can be combated by: can management not find new and profitable uses for these machines? Questions like these are part of engineering economy, as they preface any real studies or analyses. Linear Programming Linear programming is the use of mathematical methods to find optimized solutions, whether minimized or maximized in nature. The method uses a series of linear constraints to bound a feasible region (in two variables, a polygon), and the largest or smallest value of the objective is then found at a corner point of that region. Manufacturing operations often use linear programming to help mitigate costs and maximize profits or production. Interest and Money–Time Relationships Considering the prevalence of capital to be lent for a certain period of time, with the understanding that it will be returned to the investor, money-time relationships analyze the costs associated with these types of actions. Capital itself must be divided into two different categories, equity capital and debt capital. Equity capital is money already at the disposal of the business, mainly derived from profit, and therefore is not of much concern, as it has no owners that demand its return with interest. Debt capital does indeed have owners, and they require that its usage be returned with "profit", otherwise known as interest. 
The interest to be paid by the business is going to be an expense, while the capital lenders will take interest as a profit, which may confuse the situation. To add to this, each will change the income tax position of the participants. Interest and money-time relationships come into play when the capital required to complete a project must be either borrowed or drawn from reserves. Borrowing raises the question of interest against the value created by the completion of the project, while taking capital from reserves denies its usage to other projects that may yield greater returns. Interest in the simplest terms is defined by the multiplication of the principal, the units of time, and the interest rate. The complexity of interest calculations, however, becomes much higher as factors such as compounding interest or annuities come into play. Engineers often utilize compound interest tables to determine the future or present value of capital. These tables can also be used to determine the effect annuities have on loans, operations, or other situations. All one needs to utilize a compound interest table is three things: the time period of the analysis, the minimum attractive rate of return (MARR), and the capital value itself. The table will yield a multiplication factor to be used with the capital value; this will then give the user the proper future or present value. Examples of Present, Future, and Annuity Analysis Using the compound interest tables mentioned above, an engineer or manager can quickly determine the value of capital over a certain time period. For example, a company wishes to borrow $5,000.00 to finance a new machine, and will need to repay that loan in 5 years at 7%. Using the table, 5 years and 7% gives the factor of 1.403, which will be multiplied by $5,000.00. This will result in $7,015.00. This is of course under the assumption that the company will make a lump payment at the conclusion of the five years, not making any payments prior. A much more applicable example is one with a certain piece of equipment that will yield benefit for a manufacturing operation over a certain period of time. For instance, the machine benefits the company $2,500.00 every year, and has a useful life of 8 years. The MARR is determined to be roughly 5%. The compound interest tables yield a different factor for different types of analysis in this scenario. If the company wishes to know the net present benefit (NPB) of these benefits, then the factor is the P/A of 8 yrs at 5%, which is 6.463. If the company wishes to know the future worth of these benefits, then the factor is the F/A of 8 yrs at 5%, which is 9.549. The former gives a NPB of $16,157.50, while the latter gives a future value of $23,872.50. These scenarios are extremely simple in nature, and do not reflect the reality of most industrial situations. Thus, an engineer must begin to factor in costs and benefits, then find the worth of the proposed machine, expansion, or facility. Depreciation and Valuation The fact that assets and material in the real world eventually wear down, and thence break, is a situation that must be accounted for. Depreciation itself is defined as the decrease in value of any given asset, though some exceptions do exist. Valuation can be considered the basis for depreciation in a basic sense, as any decrease in value would be based on an original value. 
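The factor arithmetic behind these tables is simple enough to verify directly. The following is a minimal sketch, not part of the original text: the function names follow the conventional factor notation (F/P, P/A, F/A), and the printed values reproduce the worked figures above.

```python
# Compound interest factors used in the worked examples above.
# F/P: future worth of a single present sum; P/A: present worth of a
# uniform annual series; F/A: future worth of a uniform annual series.

def f_over_p(i: float, n: int) -> float:
    """Single-payment compound amount factor."""
    return (1 + i) ** n

def p_over_a(i: float, n: int) -> float:
    """Uniform-series present worth factor."""
    return ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def f_over_a(i: float, n: int) -> float:
    """Uniform-series compound amount factor."""
    return ((1 + i) ** n - 1) / i

print(round(f_over_p(0.07, 5), 3))  # 1.403: $5,000 borrowed grows to ~$7,015
print(round(p_over_a(0.05, 8), 3))  # 6.463: NPB of $2,500/yr is ~$16,157.50
print(round(f_over_a(0.05, 8), 3))  # 9.549: future worth is ~$23,872.50
```

Multiplying each factor by the cash amount reproduces the $7,015.00, $16,157.50, and $23,872.50 figures above, to table-rounding precision.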
The idea of depreciation is especially relevant to engineering and project management because capital equipment and assets used in operations slowly decrease in worth, which coincides with an increase in the likelihood of machine failure. Hence the recording and calculation of depreciation is important for two major reasons: to give an estimate of the "recovery capital" that has been put back into the property, and to enable depreciation to be charged against profits so that, like other costs, it can be used for income taxation purposes. Both of these reasons, however, cannot make up for the "fleeting" nature of depreciation, which makes direct analysis somewhat difficult. To further add to the issues associated with depreciation, it must be broken down into three separate types, each having intricate calculations and implications: normal depreciation, due to physical or functional losses; price depreciation, due to changes in market value; and depletion, due to the use of all available resources. Calculation of depreciation also comes in a number of forms: straight line, declining balance, sum-of-the-years'-digits, and service output. The first method is perhaps the easiest to calculate, while the remaining have varying levels of difficulty and utility. Most situations faced by managers in regards to depreciation can be solved using any of these formulas; however, company policy or individual preference may affect the choice of model. The main form of depreciation used inside the U.S. is the Modified Accelerated Cost Recovery System (MACRS), and it is based on a number of tables that give the class of an asset and its life. Certain classes are given certain lifespans, and these affect the value of an asset that can be depreciated each year. This does not necessarily mean that an asset must be discarded after its MACRS life is fulfilled, just that it can no longer be used for tax deductions. Capital Budgeting Capital budgeting, in relation to engineering economics, is the proper usage and utilization of capital to achieve project objectives. It can be fully defined by the statement: "... as the series of decisions by individuals and firms concerning how much and where resources will be obtained and expended to meet future objectives." This definition almost perfectly explains capital and its general relation to engineering, though some special cases may not lend themselves to such a concise explanation. The actual acquisition of that capital has many different routes, from equity to bonds to retained profits, each having unique strengths and weaknesses, especially in relation to income taxation. Factors such as risk of capital loss, along with possible or expected returns, must also be considered when capital budgeting is underway. For example, if a company has $20,000 to invest in a number of high, moderate, and low risk projects, the decision would depend upon how much risk the company is willing to take on, and whether the returns offered by each category offset this perceived risk. Continuing with this example, if the high risk project offered only a 20% return, while the moderate offered a 19% return, engineers and managers would most likely choose the moderate risk project, as its return is far more favorable for its category. The high risk project failed to offer returns sufficient to warrant its risk status. A more difficult decision may be between a moderate risk project offering 15% and a low risk project offering an 11% return. 
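Two of the book methods named above are compact enough to sketch. This is a minimal illustration under assumed figures (a $10,000 asset, $1,000 salvage, 5-year life); it is not the MACRS procedure, which is driven by published IRS tables.

```python
# Straight-line vs. declining-balance depreciation schedules.

def straight_line(cost: float, salvage: float, life: int) -> list[float]:
    """Equal annual charge: (cost - salvage) / life."""
    return [(cost - salvage) / life] * life

def declining_balance(cost: float, life: int, factor: float = 2.0) -> list[float]:
    """Fixed fraction of the remaining book value each year; factor=2.0
    gives 'double declining balance'. Salvage is ignored in this simple form."""
    rate = factor / life
    book, charges = cost, []
    for _ in range(life):
        charges.append(book * rate)
        book -= charges[-1]
    return charges

print(straight_line(10_000, 1_000, 5))
# [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print([round(c, 2) for c in declining_balance(10_000, 5)])
# [4000.0, 2400.0, 1440.0, 864.0, 518.4]
```

Note how the declining-balance schedule front-loads the charges, which is why accelerated methods are attractive for tax purposes.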
The decision here would be much more subject to factors such as company policy, extra available capital, and possible investors. "In general, the firm should estimate the project opportunities, including investment requirements and prospective rates of return for each, expected to be available for the coming period. Then the available capital should be tentatively allocated to the most favorable projects. The lowest prospective rate of return within the capital available then becomes the minimum acceptable rate of return for analyses of any projects during that period." Minimum Cost Formulas One of the most important and integral operations in engineering economics is the minimization of cost in systems and processes. Time, resources, labor, and capital must all be minimized when placed into any system, so that revenue, product, and profit can be maximized. Hence the general equation C = ax + b/x + k, where C is total cost, a, b, and k are constants, and x is the variable number of units produced; setting the derivative dC/dx = a − b/x² to zero gives the cost-minimizing output x* = √(b/a) (a small numeric sketch appears at the end of this article). There are a great number of cost analysis formulas, each for particular situations, warranted by the policies of the company in question or the preferences of the engineer at hand. Economic Studies, both Private and Public in Nature Economic studies, which are much more common outside of engineering economics, are still used from time to time to determine the feasibility and utility of certain projects. They do not, however, truly reflect the "common notion" of economic studies, which is fixated upon macroeconomics, something engineers have little interaction with. Therefore, the studies conducted in engineering economics are for specific companies and limited projects inside those companies. At most one may expect to find some feasibility studies done by private firms for the government or another business, but these again are in stark contrast to the overarching nature of true economic studies. Studies have a number of major steps that can be applied to almost every type of situation, those being as follows: Planning and screening - mainly reviewing objectives and issues that may be encountered. Reference to standard economic studies - consultation of standard forms. Estimating - speculating as to the magnitude of costs and other variables. Reliability - the ability to properly estimate. Comparison between actual and projected performance - verifying savings and reviewing failures, to ensure that proposals were valid and to add to future studies. Objectivity of the analyst - ensuring that the individual who advanced proposals or conducted analysis was not biased toward certain outcomes.
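The minimum cost formula referenced above lends itself to a short sketch. The coefficients below are illustrative assumptions only, not figures from the text; the closed-form minimizer follows from setting dC/dx to zero.

```python
# Minimum-cost formula C = a*x + b/x + k and its closed-form minimizer.
import math

def total_cost(x: float, a: float, b: float, k: float) -> float:
    """Directly varying cost a*x, inversely varying cost b/x, fixed cost k."""
    return a * x + b / x + k

def optimal_units(a: float, b: float) -> float:
    """First-order condition a - b/x**2 = 0 gives x* = sqrt(b/a)."""
    return math.sqrt(b / a)

a, b, k = 2.0, 800.0, 50.0          # assumed coefficients for illustration
x_star = optimal_units(a, b)
print(x_star)                        # 20.0 units
print(total_cost(x_star, a, b, k))   # 130.0, the minimum total cost
```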
Technology
Basics
null
15440316
https://en.wikipedia.org/wiki/Wind
Wind
Wind is the natural movement of air or other gases relative to a planet's surface. Winds occur on a range of scales, from thunderstorm flows lasting tens of minutes, to local breezes generated by heating of land surfaces and lasting a few hours, to global winds resulting from the difference in absorption of solar energy between the climate zones on Earth. The study of wind is called anemology. The two main causes of large-scale atmospheric circulation are the differential heating between the equator and the poles, and the rotation of the planet (Coriolis effect). Within the tropics and subtropics, thermal low circulations over terrain and high plateaus can drive monsoon circulations. In coastal areas the sea breeze/land breeze cycle can define local winds; in areas that have variable terrain, mountain and valley breezes can prevail. Winds are commonly classified by their spatial scale, their speed and direction, the forces that cause them, the regions in which they occur, and their effect. Winds have various defining aspects such as velocity (wind speed), the density of the gases involved, and energy content or wind energy. In meteorology, winds are often referred to according to their strength, and the direction from which the wind is blowing. The convention for directions refers to where the wind comes from; therefore, a 'western' or 'westerly' wind blows from the west to the east, a 'northern' wind blows from the north towards the south, and so on. This is sometimes counter-intuitive. Short bursts of high speed wind are termed gusts. Strong winds of intermediate duration (around one minute) are termed squalls. Long-duration winds have various names associated with their average strength, such as breeze, gale, storm, and hurricane. In outer space, solar wind is the movement of gases or charged particles from the Sun through space, while planetary wind is the outgassing of light chemical elements from a planet's atmosphere into space. The strongest observed winds on a planet in the Solar System occur on Neptune and Saturn. In human civilization, the concept of wind has been explored in mythology, influenced the events of history, expanded the range of transport and warfare, and provided a power source for mechanical work, electricity, and recreation. Wind powers the voyages of sailing ships across Earth's oceans. Hot air balloons use the wind to take short trips, and powered flight uses it to increase lift and reduce fuel consumption. Areas of wind shear caused by various weather phenomena can lead to dangerous situations for aircraft. When winds become strong, trees and human-made structures can be damaged or destroyed. Winds can shape landforms, via a variety of aeolian processes such as the formation of fertile soils, for example loess, and by erosion. Dust from large deserts can be moved great distances from its source region by the prevailing winds; winds that are accelerated by rough topography and associated with dust outbreaks have been assigned regional names in various parts of the world because of their significant effects on those regions. Wind also affects the spread of wildfires. Winds can disperse seeds from various plants, enabling the survival and dispersal of those plant species, as well as flying insect and bird populations. When combined with cold temperatures, the wind has a negative impact on livestock. Wind affects animals' food stores, as well as their hunting and defensive strategies. 
Causes Wind is caused by differences in atmospheric pressure, which are primarily due to temperature differences. When a difference in atmospheric pressure exists, air moves from the higher to the lower pressure area, resulting in winds of various speeds. On a rotating planet, air will also be deflected by the Coriolis effect, except exactly on the equator. Globally, the two major driving factors of large-scale wind patterns (the atmospheric circulation) are the differential heating between the equator and the poles (difference in absorption of solar energy leading to buoyancy forces) and the rotation of the planet. Outside the tropics, and aloft away from the frictional effects of the surface, the large-scale winds tend to approach geostrophic balance. Near the Earth's surface, friction causes the wind to be slower than it would be otherwise. Surface friction also causes winds to blow more inward into low-pressure areas. Winds defined by an equilibrium of physical forces are used in the decomposition and analysis of wind profiles. They are useful for simplifying the atmospheric equations of motion and for making qualitative arguments about the horizontal and vertical distribution of horizontal winds. The geostrophic wind component is the result of the balance between the Coriolis force and the pressure gradient force. It flows parallel to isobars and approximates the flow above the atmospheric boundary layer in the midlatitudes. The thermal wind is the difference in the geostrophic wind between two levels in the atmosphere. It exists only in an atmosphere with horizontal temperature gradients. The ageostrophic wind component is the difference between the actual and geostrophic wind, and is responsible for air "filling up" cyclones over time. The gradient wind is similar to the geostrophic wind but also includes centrifugal force (or centripetal acceleration). Measurement Wind direction is usually expressed in terms of the direction from which it originates. For example, a northerly wind blows from the north to the south. Weather vanes pivot to indicate the direction of the wind. At airports, windsocks indicate wind direction, and can also be used to estimate wind speed by their angle of hang. Wind speed is measured by anemometers, most commonly using rotating cups or propellers. When a high measurement frequency is needed (such as in research applications), wind can be measured by the propagation speed of ultrasound signals or by the effect of ventilation on the resistance of a heated wire. Another type of anemometer uses pitot tubes that take advantage of the pressure differential between an inner tube and an outer tube that is exposed to the wind to determine the dynamic pressure, which is then used to compute the wind speed. Sustained wind speeds are reported globally at a height of 10 metres (33 ft) and are averaged over a 10-minute time frame. The United States reports winds over a 1-minute average for tropical cyclones, and a 2-minute average within weather observations. India typically reports winds over a 3-minute average. Knowing the wind sampling average is important, as the value of a one-minute sustained wind is typically 14% greater than a ten-minute sustained wind. A short burst of high speed wind is termed a wind gust; one technical definition of a wind gust is: the maxima that exceed the lowest wind speed measured during a ten-minute time interval by for periods of seconds. A squall is an increase of the wind speed above a certain threshold, which lasts for a minute or more. 
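The geostrophic balance described above can be made concrete with a one-line calculation: wind speed equals the pressure gradient divided by air density times the Coriolis parameter. This is a minimal sketch using assumed, typical midlatitude values (a 4 hPa difference over 500 km, 45° latitude, air density 1.2 kg/m³), none of which come from the text above.

```python
# Geostrophic wind speed from a horizontal pressure gradient:
# V = (1 / (rho * f)) * (dp / dn), with f = 2 * Omega * sin(latitude).
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg: float) -> float:
    """Coriolis parameter f at a given latitude."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_speed(dp_pa: float, dist_m: float, lat_deg: float,
                      rho: float = 1.2) -> float:
    """Wind speed from the balance of pressure-gradient and Coriolis forces."""
    return (dp_pa / dist_m) / (rho * coriolis_parameter(lat_deg))

# A 4 hPa (400 Pa) pressure difference over 500 km at 45 degrees latitude:
print(round(geostrophic_speed(400.0, 500e3, 45.0), 1))  # ~6.5 m/s
```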
To determine winds aloft, radiosondes determine wind speed by GPS, radio navigation, or radar tracking of the probe. Alternatively, movement of the parent weather balloon position can be tracked from the ground visually using theodolites. Remote sensing techniques for wind include SODAR, Doppler lidars and radars, which can measure the Doppler shift of electromagnetic radiation scattered or reflected off suspended aerosols or molecules, and radiometers and radars can be used to measure the surface roughness of the ocean from space or airplanes. Ocean roughness can be used to estimate wind velocity close to the sea surface over oceans. Geostationary satellite imagery can be used to estimate the winds at cloud top based upon how far clouds move from one image to the next. Wind engineering describes the study of the effects of the wind on the built environment, including buildings, bridges and other artificial objects. Models Models can provide spatial and temporal information about airflow. Spatial information can be obtained through the interpolation of data from various measurement stations, allowing for horizontal data calculation. Alternatively, profiles, such as the logarithmic wind profile, can be utilized to derive vertical information. Temporal information is typically computed by solving the Navier-Stokes equations within numerical weather prediction models, generating global data for General Circulation Models or specific regional data. The calculation of wind fields is influenced by factors such as radiation differentials, Earth's rotation, and friction, among others. Solving the Navier-Stokes equations is a time-consuming numerical process, but machine learning techniques can help expedite computation time. Numerical weather prediction models have significantly advanced our understanding of atmospheric dynamics and have become indispensable tools in weather forecasting and climate research. By leveraging both spatial and temporal data, these models enable scientists to analyze and predict global and regional wind patterns, contributing to our comprehension of the Earth's complex atmospheric system. Wind force scale Historically, the Beaufort wind force scale, created by Francis Beaufort, provides an empirical description of wind speed based on observed sea conditions. Originally it was a 13-level scale (0–12), but during the 1940s, the scale was expanded to 18 levels (0–17). There are general terms that differentiate winds of different average speeds such as a breeze, a gale, a storm, or a hurricane. Within the Beaufort scale, gale-force winds lie between and with preceding adjectives such as moderate, fresh, strong, and whole used to differentiate the wind's strength within the gale category. A storm has winds of to . The terminology for tropical cyclones differs from one region to another globally. Most ocean basins use the average wind speed to determine the tropical cyclone's category. Below is a summary of the classifications used by Regional Specialized Meteorological Centers worldwide: Enhanced Fujita scale The Enhanced Fujita Scale (EF Scale) rates the strength of tornadoes by using damage to estimate wind speed. It has six levels, from visible damage to complete destruction. It is used in the United States and in some other countries, including Canada and France, with small modifications. Station model The station model plotted on surface weather maps uses a wind barb to show both wind direction and speed. The wind barb shows the speed using "flags" on the end. 
Each half of a flag depicts 5 knots of wind. Each full flag depicts 10 knots of wind. Each pennant (filled triangle) depicts 50 knots of wind. Winds are depicted as blowing from the direction the barb is facing. Therefore, a northeast wind will be depicted with a line extending from the cloud circle to the northeast, with flags indicating wind speed on the northeast end of this line. Once plotted on a map, an analysis of isotachs (lines of equal wind speeds) can be accomplished. Isotachs are particularly useful in diagnosing the location of the jet stream on upper-level constant pressure charts, and are usually located at or above the 300 hPa level. Global climatology Easterly winds, on average, dominate the flow pattern across the poles, westerly winds blow across the mid-latitudes of the Earth, polewards of the subtropical ridge, while easterlies again dominate the tropics. Directly under the subtropical ridge are the horse latitudes, where winds are lighter. Many of the Earth's deserts lie near the average latitude of the subtropical ridge, where descent reduces the relative humidity of the air mass. The strongest winds are in the mid-latitudes where cold polar air meets warm air from the tropics. Tropics The trade winds (also called trades) are the prevailing pattern of easterly surface winds found in the tropics towards the Earth's equator. The trade winds blow predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. The trade winds act as the steering flow for tropical cyclones that form over the world's oceans. Trade winds also steer African dust westward across the Atlantic Ocean into the Caribbean, as well as portions of southeast North America. A monsoon is a seasonal prevailing wind that lasts for several months within tropical regions. The term was first used in English in India, Bangladesh, Pakistan, and neighboring countries to refer to the big seasonal winds blowing from the Indian Ocean and Arabian Sea in the southwest, bringing heavy rainfall to the area. Its poleward progression is accelerated by the development of a heat low over the Asian, African, and North American continents during May through July, and over Australia in December. Westerlies and their impact The Westerlies or the Prevailing Westerlies are the prevailing winds in the middle latitudes between 35 and 65 degrees latitude. These prevailing winds blow from the west to the east, and steer extratropical cyclones in this general manner. The winds are predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere. They are strongest in the winter, when the pressure is lower over the poles, and weakest during the summer, when pressures are higher over the poles. Together with the trade winds, the westerlies enabled a round-trip trade route for sailing ships crossing the Atlantic and Pacific Oceans, as the westerlies lead to the development of strong ocean currents on the western sides of oceans in both hemispheres through the process of western intensification. These western ocean currents transport warm, sub-tropical water polewards toward the polar regions. The westerlies can be particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, which slows the winds down. The strongest westerly winds in the middle latitudes are within a band known as the Roaring Forties, between 40 and 50 degrees latitude south of the equator. 
The Westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse. Polar easterlies The polar easterlies, also known as Polar Hadley cells, are dry, cold prevailing winds that blow from the high-pressure areas of the polar highs at the North and South Poles towards the low-pressure areas within the Westerlies at high latitudes. Unlike the Westerlies, these prevailing winds blow from the east to the west, and are often weak and irregular. Because of the low sun angle, cold air builds up and subsides at the pole, creating surface high-pressure areas and forcing an equatorward outflow of air; that outflow is deflected westward by the Coriolis effect. Local considerations Sea and land breezes In coastal regions, sea breezes and land breezes can be important factors in a location's prevailing winds. The sea is warmed by the sun more slowly because of water's greater specific heat compared to land. As the temperature of the surface of the land rises, the land heats the air above it by conduction. The warm air is less dense than the surrounding environment and so it rises. The cooler air above the sea, now with higher sea level pressure, flows inland into the lower pressure, creating a cooler breeze near the coast. A background along-shore wind either strengthens or weakens the sea breeze, depending on its orientation with respect to the Coriolis force. At night, the land cools off more quickly than the ocean because of differences in their specific heat values. This temperature change causes the daytime sea breeze to dissipate. When the temperature onshore cools below the temperature offshore, the pressure over the water will be lower than that of the land, establishing a land breeze, as long as an onshore wind is not strong enough to oppose it. Near mountains Over elevated surfaces, heating of the ground exceeds the heating of the surrounding air at the same altitude above sea level, creating an associated thermal low over the terrain and enhancing any thermal lows that would have otherwise existed, and changing the wind circulation of the region. In areas where there is rugged topography that significantly interrupts the environmental wind flow, the wind circulation between mountains and valleys is the most important contributor to the prevailing winds. Hills and valleys substantially distort the airflow by increasing friction between the atmosphere and the landmass and by acting as a physical block to the flow, deflecting the wind parallel to the range just upstream of the topography, in what is known as a barrier jet. This barrier jet can increase the low-level wind by 45%. Wind direction also changes because of the contour of the land. If there is a pass in the mountain range, winds will rush through the pass with considerable speed because of the Bernoulli principle, which describes an inverse relationship between speed and pressure. The airflow can remain turbulent and erratic for some distance downwind into the flatter countryside. These conditions are dangerous to ascending and descending airplanes. Cool winds accelerating through mountain gaps have been given regional names. In Central America, examples include the Papagayo wind, the Panama wind, and the Tehuano wind. In Europe, similar winds are known as the Bora, Tramontane, and Mistral. 
When these winds blow over open waters, they increase the mixing of the upper layers of the ocean, elevating cool, nutrient-rich waters to the surface, which leads to increased marine life. In mountainous areas, local distortion of the airflow becomes severe. Jagged terrain combines to produce unpredictable flow patterns and turbulence, such as rotors, which can be topped by lenticular clouds. Strong updrafts, downdrafts, and eddies develop as the air flows over hills and down valleys. Orographic precipitation occurs on the windward side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, also known as upslope flow, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air on the descending and generally warming leeward side, where a rain shadow is observed. Winds that flow over mountains down into lower elevations are known as downslope winds. These winds are warm and dry. In Europe downwind of the Alps, they are known as foehn. In Poland, an example is the halny wiatr. In Argentina, the local name for downslope winds is zonda. In Java, the local name for such winds is koembang. In New Zealand, they are known as the Nor'west arch, and are accompanied by the cloud formation they are named after, which has inspired artwork over the years. In the Great Plains of the United States, these winds are known as a chinook. Downslope winds also occur in the foothills of the Appalachian mountains of the United States, and they can be as strong as other downslope winds and unusual compared to other foehn winds in that the relative humidity typically changes little because of the increased moisture in the source air mass. In California, downslope winds are funneled through mountain passes, which intensify their effect, and examples include the Santa Ana and sundowner winds. Wind speeds during downslope wind events can exceed . Shear Wind shear, sometimes referred to as wind gradient, is a difference in wind speed and direction over a relatively short distance in the Earth's atmosphere. Wind shear can be broken down into vertical and horizontal components, with horizontal wind shear seen across weather fronts and near the coast, and vertical shear typically near the surface, though also at higher levels in the atmosphere near upper-level jets and frontal zones aloft. Wind shear itself is a microscale meteorological phenomenon occurring over a very small distance, but it can be associated with mesoscale or synoptic scale weather features such as squall lines and cold fronts. It is commonly observed near microbursts and downbursts caused by thunderstorms, weather fronts, areas of locally higher low-level winds referred to as low-level jets, near mountains, radiation inversions that occur because of clear skies and calm winds, buildings, wind turbines, and sailboats. Wind shear has a significant effect on the control of aircraft during take-off and landing, and has been a significant cause of aircraft accidents involving large loss of life within the United States. Sound movement through the atmosphere is affected by wind shear, which can bend the wave front, causing sounds to be heard where they normally would not, or vice versa. 
Strong vertical wind shear within the troposphere also inhibits tropical cyclone development, but helps to organize individual thunderstorms into longer-lasting life cycles that can then produce severe weather. The thermal wind concept explains how differences in wind speed with height are dependent on horizontal temperature differences, and explains the existence of the jet stream. In civilization Religion As a natural force, the wind was often personified as one or more wind gods or as an expression of the supernatural in many cultures. Vayu is the Vedic and Hindu God of Wind. The Greek wind gods include Boreas, Notus, Eurus, and Zephyrus. Aeolus, in varying interpretations the ruler or keeper of the four winds, has also been described as Astraeus, the god of dusk who fathered the four winds with Eos, goddess of dawn. The ancient Greeks also observed the seasonal change of the winds, as evidenced by the Tower of the Winds in Athens. Venti are the Roman gods of the winds. Fūjin is the Japanese wind god and is one of the eldest Shinto gods. According to legend, he was present at the creation of the world and first let the winds out of his bag to clear the world of mist. In Norse mythology, Njörðr is the god of the wind. There are also four dvärgar (Norse dwarves), named Norðri, Suðri, Austri and Vestri, who, along with probably the four stags of Yggdrasil, personify the four winds and parallel the four Greek wind gods. Stribog is the name of the Slavic god of winds, sky and air. He is said to be the ancestor (grandfather) of the winds of the eight directions. History Kamikaze is a Japanese word, usually translated as divine wind, believed to be a gift from the gods. The term is first known to have been used as the name of a pair or series of typhoons that are said to have saved Japan from two Mongol fleets under Kublai Khan, which attacked in 1274 and again in 1281. Protestant Wind is a name for the storm that deterred the Spanish Armada from an invasion of England in 1588, in which the wind played a pivotal role, or for the favorable winds that enabled William of Orange to invade England in 1688. During Napoleon's Egyptian Campaign, the French soldiers had a hard time with the khamsin wind: when the storm appeared "as a blood-stint in the distant sky", the Ottomans went to take cover, while the French "did not react until it was too late, then choked and fainted in the blinding, suffocating walls of dust". During the North African Campaign of World War II, "allied and German troops were several times forced to halt in mid-battle because of sandstorms caused by khamsin... Grains of sand whirled by the wind blinded the soldiers and created electrical disturbances that rendered compasses useless." Transportation There are many different forms of sailing ships, but they all have certain basic things in common. Except for rotor ships using the Magnus effect, every sailing ship has a hull, rigging and at least one mast to hold up the sails that use the wind to power the ship. Ocean journeys by sailing ship can take many months, and a common hazard is becoming becalmed because of lack of wind, or being blown off course by severe storms or winds that do not allow progress in the desired direction. A severe storm could lead to shipwreck, and the loss of all hands. Sailing ships can only carry a certain quantity of supplies in their hold, so they have to plan long voyages carefully to include appropriate provisions, including fresh water. 
For aerodynamic aircraft which operate relative to the air, winds affect groundspeed, and in the case of lighter-than-air vehicles, wind may play a significant or solitary role in their movement and ground track. The velocity of the surface wind is generally the primary factor governing the direction of flight operations at an airport, and airfield runways are aligned to account for the common wind direction(s) of the local area. While taking off with a tailwind may be necessary under certain circumstances, a headwind is generally desirable: a tailwind increases the takeoff distance required and decreases the climb gradient. Power source The ancient Sinhalese of Anuradhapura and in other cities around Sri Lanka used the monsoon winds to power furnaces as early as 300 BCE. The furnaces were constructed on the path of the monsoon winds to bring the temperatures inside up to . A rudimentary windmill was used to power an organ in the first century CE. Windmills were later built in Sistan, Afghanistan, from the 7th century CE. These were vertical-axle windmills, with sails covered in reed matting or cloth material. These windmills were used to grind corn and draw up water, and served the gristmilling and sugarcane industries. Horizontal-axle windmills were later used extensively in Northwestern Europe to grind flour beginning in the 1180s, and many Dutch windmills still exist. Wind power is now one of the main sources of renewable energy, and its use is growing rapidly, driven by innovation and falling prices. Most of the installed capacity in wind power is onshore, but offshore wind power offers large potential, as wind speeds are typically higher and more constant away from the coast. Wind energy, the kinetic energy of moving air, is proportional to the third power of the wind velocity. Betz's law describes the theoretical upper limit on the fraction of this energy that wind turbines can extract, which is about 59% (the relevant formulas are sketched after this passage). Recreation Wind figures prominently in several popular sports, including recreational hang gliding, hot air ballooning, kite flying, snowkiting, kite landboarding, kite surfing, paragliding, sailing, and windsurfing. In gliding, wind gradients just above the surface affect the takeoff and landing phases of a glider's flight. Wind gradient can have a noticeable effect on ground launches, also known as winch launches or wire launches. If the wind gradient is significant or sudden, or both, and the pilot maintains the same pitch attitude, the indicated airspeed will increase, possibly exceeding the maximum ground launch tow speed. The pilot must adjust the airspeed to deal with the effect of the gradient. When landing, wind shear is also a hazard, particularly when the winds are strong. As the glider descends through the wind gradient on final approach to landing, airspeed decreases while sink rate increases, leaving insufficient time to accelerate prior to ground contact. The pilot must anticipate the wind gradient and use a higher approach speed to compensate for it. In the natural world In arid climates, the main source of erosion is wind. The general wind circulation moves small particulates such as dust across wide oceans, thousands of kilometers downwind of their point of origin, a process known as deflation. Westerly winds in the mid-latitudes of the planet drive the movement of ocean currents from west to east across the world's oceans. Wind has a very important role in aiding plants and other immobile organisms in the dispersal of seeds, spores, pollen, etc. 
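The cubic dependence and the 59% ceiling cited in the power-source discussion above are commonly written as follows; this is the standard textbook formulation, offered as an illustration rather than as a formula taken from this article's sources:

P_{\text{wind}} = \tfrac{1}{2}\,\rho A v^{3}, \qquad P_{\max} = \frac{16}{27}\,P_{\text{wind}} \approx 0.593\,P_{\text{wind}}

where \rho is the air density, A the rotor's swept area, and v the wind speed; Betz's limit of 16/27 (about 59%) caps the fraction an ideal turbine can extract. A practical consequence of the cube law is that doubling the wind speed multiplies the available power by eight, which is why turbine siting strongly favors consistently windy locations.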
Although wind is not the primary form of seed dispersal in plants, it provides dispersal for a large percentage of the biomass of land plants. Erosion Erosion can be the result of material movement by the wind. There are two main effects. First, wind causes small particles to be lifted and therefore moved to another region; this is called deflation. Second, these suspended particles may impact solid objects, causing erosion by abrasion. Wind erosion generally occurs in areas with little or no vegetation, often in areas where there is insufficient rainfall to support vegetation. An example is the formation of sand dunes on a beach or in a desert. Loess is a homogeneous, typically nonstratified, porous, friable, slightly coherent, often calcareous, fine-grained, silty, pale yellow or buff, windblown (aeolian) sediment. It generally occurs as a widespread blanket deposit that covers areas of hundreds of square kilometers to a thickness of tens of meters. Loess often stands in steep or vertical faces. Loess tends to develop into highly fertile soils. Under appropriate climatic conditions, areas with loess are among the most agriculturally productive in the world. Loess deposits are geologically unstable by nature and erode very readily. Therefore, windbreaks (such as big trees and bushes) are often planted by farmers to reduce the wind erosion of loess. Desert dust migration During mid-summer (July in the northern hemisphere), the westward-moving trade winds south of the northward-moving subtropical ridge expand northwestward from the Caribbean into southeastern North America. When dust from the Sahara moving around the southern periphery of the ridge within the belt of trade winds passes over land, rainfall is suppressed and the sky changes from blue to a white appearance, which leads to an increase in red sunsets. Its presence negatively impacts air quality by adding to the count of airborne particulates. Over 50% of the African dust that reaches the United States affects Florida. Since 1970, dust outbreaks have worsened because of periods of drought in Africa. There is large variability in the dust transport to the Caribbean and Florida from year to year. Dust events have been linked to a decline in the health of coral reefs across the Caribbean and Florida, primarily since the 1970s. Similar dust plumes originate in the Gobi Desert, which, combined with pollutants, spread large distances downwind, or eastward, into North America. There are local names for winds associated with sand and dust storms. The Calima carries dust on southeast winds into the Canary Islands. The Harmattan carries dust during the winter into the Gulf of Guinea. The Sirocco brings dust from north Africa into southern Europe because of the movement of extratropical cyclones through the Mediterranean. Spring storm systems moving across the eastern Mediterranean Sea drive dust across Egypt and the Arabian Peninsula on winds locally known as Khamsin. The Shamal is caused by cold fronts lifting dust into the atmosphere for days at a time across the Persian Gulf states. Effect on plants Wind dispersal of seeds, or anemochory, is one of the more primitive means of dispersal. Wind dispersal can take one of two primary forms: seeds can float on the breeze or, alternatively, flutter to the ground. 
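A common first-order way to compare these two strategies is a ballistic estimate of dispersal distance: the seed drifts downwind for as long as it takes to fall from its release height at its terminal velocity. The Python sketch below is a simplification under that assumption; the numbers are illustrative, not measured values from this article's sources:

def dispersal_distance(release_height_m, wind_speed_ms, terminal_velocity_ms):
    # Ballistic estimate: horizontal drift = wind speed * fall time,
    # where fall time = release height / terminal fall velocity.
    fall_time = release_height_m / terminal_velocity_ms
    return wind_speed_ms * fall_time

# A plumed, dandelion-like seed (terminal velocity ~0.3 m/s) released at
# 0.3 m in a 3 m/s breeze drifts roughly 3 m; a heavier winged seed
# falling at ~1 m/s from the same height drifts under 1 m.
print(dispersal_distance(0.3, 3.0, 0.3))  # ~3.0
print(dispersal_distance(0.3, 3.0, 1.0))  # ~0.9

The model ignores turbulence and updrafts, which in reality allow a small fraction of plumed seeds to travel far beyond this estimate.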
The classic examples of these dispersal mechanisms include dandelions (Taraxacum spp., Asteraceae), which have a feathery pappus attached to their seeds and can be dispersed long distances, and maples (Acer spp., Sapindaceae), which have winged seeds that flutter to the ground. An important constraint on wind dispersal is the need for abundant seed production to maximize the likelihood of a seed landing in a site suitable for germination. There are also strong evolutionary constraints on this dispersal mechanism. For instance, species in the Asteraceae on islands tended to have reduced dispersal capabilities (i.e., larger seed mass and smaller pappus) relative to the same species on the mainland. Reliance upon wind dispersal is common among many weedy or ruderal species. Unusual mechanisms of wind dispersal include tumbleweeds. A process related to anemochory is anemophily, the distribution of pollen by wind. Large families of plants are pollinated in this manner, which is favored when individuals of the dominant plant species are spaced closely together. Wind also limits tree growth. On coasts and isolated mountains, the tree line is often much lower than at corresponding altitudes inland and in larger, more complex mountain systems, because strong winds reduce tree growth. High winds scour away thin soils through erosion and damage limbs and twigs. When high winds knock down or uproot trees, the process is known as windthrow. This is most likely on windward slopes of mountains, with severe cases generally occurring in tree stands that are 75 years or older. Plant varieties near the coast, such as the Sitka spruce and sea grape, are pruned back by wind and salt spray near the coastline. Wind can also damage plants through sand abrasion. Strong winds will pick up loose sand and topsoil and hurl it through the air at speeds ranging from to . Such windblown sand causes extensive damage to plant seedlings because it ruptures plant cells, making them vulnerable to evaporation and drought. Using a mechanical sandblaster in a laboratory setting, scientists affiliated with the Agricultural Research Service studied the effects of windblown sand abrasion on cotton seedlings. The study showed that the seedlings responded to the damage created by the windblown sand abrasion by shifting energy from stem and root growth to the growth and repair of the damaged stems. After a period of four weeks, the growth of the seedlings once again became uniform throughout the plant, as it was before the windblown sand abrasion occurred. Besides seeds and pollen, wind also helps plants' enemies: spores and other propagules of plant pathogens are even lighter and able to travel long distances. A few plant diseases are known to have traveled over marginal seas and even entire oceans. Humans are unable to prevent or even slow down wind dispersal of plant pathogens, so management relies on prediction and amelioration instead. Effect on animals Cattle and sheep are prone to wind chill caused by a combination of wind and cold temperatures, when winds exceed , rendering their hair and wool coverings ineffective. Although penguins use both a layer of fat and feathers to help guard against cold in both water and air, their flippers and feet are less well protected from the cold. 
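The wind chill referred to above is commonly quantified with the North American wind chill index. A minimal Python sketch of the published formula follows; this is the standard form for air temperatures at or below 10 °C and winds above about 4.8 km/h, included as an illustration rather than a safety tool:

def wind_chill_c(temp_c, wind_kmh):
    # North American wind chill index: temperature in Celsius,
    # wind speed in km/h measured at ~10 m height.
    v = wind_kmh ** 0.16
    return 13.12 + 0.6215 * temp_c - 11.37 * v + 0.3965 * temp_c * v

# Air at -10 C with a 30 km/h wind feels like roughly -20 C.
print(round(wind_chill_c(-10, 30), 1))  # -19.5

The index formalizes the observation in the surrounding text: the same air temperature strips heat from an animal far faster once the wind defeats its insulating layer of hair, wool, or feathers.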
In the coldest climates, such as Antarctica, emperor penguins use huddling behavior to survive the wind and cold, continuously alternating the members on the outside of the assembled group, which reduces heat loss by 50%. Flying insects, a subset of arthropods, are swept along by the prevailing winds, while birds follow their own course, taking advantage of wind conditions in order to either fly or glide. As such, fine line patterns within weather radar imagery, associated with converging winds, are dominated by insect returns. Bird migration, which tends to occur overnight within the lowest of the Earth's atmosphere, contaminates wind profiles gathered by weather radar, particularly the WSR-88D, by increasing the environmental wind returns by to . Pikas use a wall of pebbles to store dry plants and grasses for the winter in order to protect the food from being blown away. Cockroaches use the slight winds that precede the attacks of potential predators, such as toads, to survive their encounters. Their cerci are very sensitive to the wind and help them survive half of such attacks. Elk have a keen sense of smell that can detect potential upwind predators at a distance of . Increases in wind above signal glaucous gulls to increase their foraging and aerial attacks on thick-billed murres. Related damage High winds are known to cause damage, depending upon the magnitude of their velocity and pressure differential. Wind pressures are positive on the windward side of a structure and negative on the leeward side. Infrequent wind gusts can cause poorly designed suspension bridges to sway. When wind gusts are at a frequency similar to that of the swaying bridge, the bridge can be destroyed more easily, as occurred with the Tacoma Narrows Bridge in 1940. Wind speeds as low as can lead to power outages due to tree branches disrupting the flow of energy through power lines. While no species of tree is guaranteed to stand up to hurricane-force winds, those with shallow roots are more prone to uprooting, and brittle trees such as eucalyptus, sea hibiscus, and avocado are more prone to damage. Hurricane-force winds cause substantial damage to mobile homes and begin to structurally damage homes with foundations. Winds of this strength, when due to downslope winds off terrain, have been known to shatter windows and sandblast paint from cars. Once winds exceed , homes completely collapse, and significant damage is done to larger buildings. Total destruction of artificial structures occurs when winds reach . The Saffir–Simpson scale and Enhanced Fujita scale were designed to help estimate wind speed from the damage caused by high winds related to tropical cyclones and tornadoes, and vice versa. Australia's Barrow Island holds the record for the strongest wind gust, reaching 408 km/h (253 mph) during Tropical Cyclone Olivia on 10 April 1996, surpassing the previous record of 372 km/h (231 mph) set on Mount Washington (New Hampshire) on the afternoon of 12 April 1934. Wildfire intensity increases during daytime hours. For example, burn rates of smoldering logs are up to five times greater during the day because of lower humidity, increased temperatures, and increased wind speeds. Sunlight warms the ground during the day and causes air currents to travel uphill, and downhill during the night as the land cools. Wildfires are fanned by these winds and often follow the air currents over hills and through valleys. United States wildfire operations revolve around a 24-hour fire day that begins at 10:00 a.m. 
because of the predictable increase in intensity resulting from the daytime warmth. In outer space The solar wind is quite different from a terrestrial wind, in that its origin is the Sun, and it is composed of charged particles that have escaped the Sun's atmosphere. Similar to the solar wind, the planetary wind is composed of light gases that escape planetary atmospheres. Over long periods of time, the planetary wind can radically change the composition of planetary atmospheres. The fastest wind ever recorded came from the accretion disc of the IGR J17091-3624 black hole. Its speed is , which is 3% of the speed of light. Planetary wind The hydrodynamic wind within the upper portion of a planet's atmosphere allows light chemical elements such as hydrogen to move up to the exobase, the lower limit of the exosphere, where the gases can then reach escape velocity, entering outer space without impacting other particles of gas. This type of gas loss from a planet into space is known as planetary wind. Such a process over geologic time causes water-rich planets such as the Earth to evolve into planets like Venus. Additionally, planets with hotter lower atmospheres could accelerate the loss rate of hydrogen. Solar wind Rather than air, the solar wind is a stream of charged particles—a plasma—ejected from the upper atmosphere of the Sun at a rate of . It consists mostly of electrons and protons with energies of about 1 keV. The stream of particles varies in temperature and speed with the passage of time. These particles are able to escape the Sun's gravity, in part because of the high temperature of the corona, but also because of the high kinetic energy that particles gain through a process that is not well understood. The solar wind creates the heliosphere, a vast bubble in the interstellar medium surrounding the Solar System. Planets require large magnetic fields in order to reduce the ionization of their upper atmosphere by the solar wind. Other phenomena caused by the solar wind include geomagnetic storms that can knock out power grids on Earth, the aurorae such as the Northern Lights, and the plasma tails of comets that always point away from the Sun. On other planets Strong winds at Venus's cloud tops circle the planet every four to five Earth days. When the poles of Mars are exposed to sunlight after their winter, the frozen CO2 sublimates, creating significant winds that sweep off the poles as fast as , which subsequently transport large amounts of dust and water vapor over its landscape. Other Martian winds have resulted in cleaning events and dust devils. On Jupiter, wind speeds of are common in zonal jet streams. Saturn's winds are among the Solar System's fastest; Cassini–Huygens data indicated peak easterly winds of . On Uranus, northern hemisphere wind speeds reach as high as near 50 degrees north latitude. At the cloud tops of Neptune, prevailing winds range in speed from along the equator to at the poles. At 70° S latitude on Neptune, a high-speed jet stream travels at a speed of . The fastest wind on any known planet is on HD 80606 b, located 190 light years away, where it blows at more than 11,000 mph or 5 km/s.
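To make the exobase escape condition in the planetary-wind paragraph concrete, a short Python sketch follows. The gravitational constant and Earth values are standard, while the exobase radius is an illustrative assumption (roughly 500 km above the surface):

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
R_EXOBASE = 6.87e6   # m: Earth's radius plus ~500 km (assumed exobase height)

def escape_velocity(mass_kg, radius_m):
    # Minimum speed at radius r to coast away from a body of mass M.
    return math.sqrt(2 * G * mass_kg / radius_m)

# ~10.8 km/s at the exobase. Light hydrogen atoms in the hot upper
# atmosphere reach such speeds far more often than heavy molecules,
# which is why planetary wind preferentially strips hydrogen.
print(escape_velocity(M_EARTH, R_EXOBASE) / 1000)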
Physical sciences
Earth science
null
6068527
https://en.wikipedia.org/wiki/Spanish%20Fighting%20Bull
Spanish Fighting Bull
The Spanish Fighting Bull (Toro Bravo, toro de lidia, toro lidiado, ganado bravo, Touro de Lide) is a heterogeneous Iberian cattle (Bos taurus) population. It is bred exclusively free-range on extensive estates in Spain, Portugal, France and the Latin American countries where bullfighting is organized. Fighting bulls are selected primarily for a certain combination of aggression, energy, strength and stamina. To preserve their natural traits, the bulls rarely encounter humans during breeding, and when they do, never on foot. History of the breed Some commentators trace the origins of the fighting bull to wild bulls from the Iberian Peninsula and their use for arena games in the Roman Empire. Although the actual origins are disputed, genetic studies have indicated that the breeding stock have an unusually old genetic pool. The bull's aggression has been maintained (or augmented) by selective breeding, and the breed has come to be popular among the people of Spain and Portugal, in the parts of Latin America where it took root during colonial rule, and in parts of Southern France, where bullfighting spread during the 19th century. In May 2010, Spanish scientists cloned the breed for the first time. The calf, named Got, meaning "glass" in Valencian, was cloned from a bull named Vasito and implanted in a Friesian surrogate cow. Breed characteristics Fighting bulls are characterized by their aggressive behavior, especially when solitary or unable to flee. Many are colored black or dark brown, but other colorations are normal. They reach maturity more slowly than meat breeds, as they were not selected to be heavy, having instead a well-muscled "athletic" look, with a prominent morrillo, a complex of muscles over the shoulder and neck which gives the bull its distinctive profile and strength with its horns. The horns are longer than in most other breeds and are present in both males and females. Mature bulls weigh from . Among fighting cattle, there are several "encastes", or subtypes of the breed. Of the so-called "foundational breeds", only the bloodlines of Vistahermosa, Vázquez, Gallardo and Cabrera remain today. In the case of the last two, only the ranches of Miura and Pablo Romero are deeply influenced by them. The so-called "modern foundational bloodlines" are Saltillo, Murube, Parladé and Santa Coloma, all of which are primarily composed of Vistahermosa blood. Cattle have dichromatic vision, rendering them red-green colorblind and falsifying the idea that the color red makes them angry; they simply respond to the movements of the muleta. The red coloring is traditional and is believed both to conceal blood stains and to provide a suitable light-dark contrast against the arena floor. Growth Fighting cattle are bred on wide-ranging ranches in Spain's dehesas or in the Portuguese Montados, which are often havens for Iberian wildlife, as the farming techniques used are extensive. Both male and female calves spend their first year of life with their mothers; then they are weaned, branded, and kept in single-sex groups. When the cattle reach maturity after two years or so, they are sent to the tienta, or testing. For the males, this establishes whether they are suitable for breeding, the bullfight, or slaughter for meat. The testing for the bullfight covers only their aggression towards the horse, as regulations forbid their charging a man on the ground before they enter the bullfighting ring. They learn how to use their horns in tests of strength and dominance with other bulls. 
Because of their exceptional aggression, these combats can lead to severe injuries and even the death of bulls, at great cost to their breeders. The females are more thoroughly tested, including by a bullfighter with his capes; hence a bull's "courage" is often said to descend from his mother. If fit for bullfighting, bulls will return to their peers. Cows passing the tienta are kept for breeding and will be slaughtered only when they can bear no more calves. At three years old, males are no longer considered calves; they are known as novillos and are ready for bullfighting, although novilladas are for training bullfighters, or novilleros. The best bulls are kept for corridas de toros with full matadors. Under Spanish law they must be at least four years old and reach a weight of 460 kg to fight in a first-rank bullring, 435 kg for a second-rank one, and 410 kg for third-rank rings. They must also have fully functional vision and even horns that have not been tampered with, and be in generally good condition. A very few times each year a bull will be indultado, or "pardoned," meaning his life is spared due to outstanding behavior in the bullring, leading the audience to petition the president of the ring with white handkerchiefs. The bullfighter joins the petition, as it is a great honor to have a bull one has fought pardoned. The president pardons the bull by showing an orange handkerchief. The bull, if he survives his injuries, which are usually severe, is then returned to the ranch where he was bred, to live out his days in the fields. In most cases, he will become a "seed bull", mated once with some 30 cows. Four years later, his offspring will be tested in the ring. If they fight well, he may be bred again. An "indultado" bull's lifespan can be 20 to 25 years. Miura The Miura is a line within the Spanish Fighting Bull bred at a ranch in the province of Seville, in Andalucia. The ranch is known for producing large and difficult fighting bulls. A Miura bull debuted in Madrid on April 30, 1849. The Miura derives from five historic lines of Spanish bull: the Gallardo, Cabrera, Navarra, Veragua, and Vistahermosa-Parladé. The bulls were fought under the name of Juan Miura until his death in 1854, and then under the name of his widow, Josefa Fernández de Miura. After her death, the livestock bore the name of her eldest son, Antonio Miura Fernández, from 1869 to 1893, and then of the younger brother, Eduardo Miura Fernández, until his death in 1917. Reputation Bulls from the Miura lineage have a reputation for being large, fierce, and cunning. It is said to be especially dangerous for a matador to turn his back on a Miura. Miura bulls have been referred to as individualists, each bull seemingly possessing a strong personal character. Ernest Hemingway noted the Miura's reputation in Death in the Afternoon. Famous bulls Murciélago survived 24 jabs with the lance from the picador in a fight on 5 October 1879 against Rafael "El Lagartijo" Molina Sánchez, at the Coso de los Califas bullring in Córdoba, Spain. Islero gored and killed the bullfighter Manolete on August 28, 1947.
Biology and health sciences
Cattle
Animals
6073894
https://en.wikipedia.org/wiki/Fermentation
Fermentation
Fermentation is a type of redox metabolism carried out in the absence of oxygen. During fermentation, organic molecules (e.g., glucose) are catabolized and donate electrons to other organic molecules. In the process, ATP and organic end products (e.g., lactate) are formed. Because oxygen is not required, fermentation is an alternative to aerobic respiration. Over 25% of bacteria and archaea carry out fermentation. They live in the gut, sediments, food, and other environments. Eukaryotes, including humans and other animals, also carry out fermentation. Fermentation is important in several areas of human society. Humans have used fermentation in the production of food for 13,000 years. Humans and their livestock have microbes in the gut that carry out fermentation, releasing products used by the host for energy. Fermentation is used at an industrial level to produce commodity chemicals, such as ethanol and lactate. In total, fermentation forms more than 50 metabolic end products, reflecting the diversity of microbial metabolism. Definition The definition of fermentation has evolved over the years. The most current definition is catabolism in which organic compounds serve as both the electron donor and the electron acceptor. A common electron donor is glucose, and pyruvate is a common electron acceptor. This definition distinguishes fermentation from aerobic respiration, where oxygen is the acceptor, and from types of anaerobic respiration, where an inorganic species is the acceptor. Fermentation has been defined differently in the past. In 1876, Louis Pasteur described it as "la vie sans air" (life without air). This definition came before the discovery of anaerobic respiration. Later, it was defined as catabolism that forms ATP only through substrate-level phosphorylation. However, several pathways of fermentation have since been discovered that also form ATP through an electron transport chain and ATP synthase. Some sources define fermentation loosely as any large-scale biological manufacturing process (see industrial fermentation); this definition focuses on the process of manufacturing rather than on metabolic details. Biological role and prevalence Fermentation is used by organisms to generate ATP for metabolism. One advantage is that it requires no oxygen or other external electron acceptors, and thus it can be carried out when those electron acceptors are absent. A disadvantage is that it produces relatively little ATP, yielding only between 2 and 4.5 ATP per glucose compared with 32 for aerobic respiration. Over 25% of bacteria and archaea carry out fermentation. This type of metabolism is most common in the phylum Bacillota, and it is least common in Actinomycetota. Their most common habitats are host-associated ones, such as the gut. Animals, including humans, also carry out fermentation. The product of fermentation in humans is lactate, and it is formed during anaerobic exercise and in cancerous cells. No animal is known to survive on fermentation alone, although one parasitic animal (Henneguya zschokkei) is known to survive without oxygen. Substrates and products of fermentation Fermentation uses a range of substrates and forms a variety of metabolic end products. Of the 55 end products formed, the most common are acetate and lactate. Of the 46 chemically defined substrates that have been reported, the most common are glucose and other sugars. Biochemical overview When an organic compound is fermented, it is broken down to a simpler molecule and releases electrons. 
The electrons are transferred to a redox cofactor, which in turn transfers them to an organic compound. ATP is generated in the process, and it can be formed by substrate-level phosphorylation or by ATP synthase. When glucose is fermented, it enters glycolysis or the Entner-Doudoroff pathway and is converted to pyruvate. From pyruvate, pathways branch out to form a number of end products (e.g. lactate). At several points, electrons are released and accepted by redox cofactors (NAD and ferredoxin). At later points, these cofactors donate electrons to their final acceptor and become oxidized. ATP is also formed at several points in the pathway. While fermentation is simple in overview, its details are more complex. Across organisms, fermentation of glucose involves over 120 different biochemical reactions. Further, multiple pathways can be responsible for forming the same product: for forming acetate from its immediate precursor (pyruvate or acetyl-CoA), six separate pathways have been found. Biochemistry of individual products Ethanol In ethanol fermentation, one glucose molecule is converted into two ethanol molecules and two carbon dioxide (CO2) molecules: C6H12O6 → 2 C2H5OH + 2 CO2 It is used to make bread dough rise: the carbon dioxide forms bubbles, expanding the dough into a foam. The ethanol is the intoxicating agent in alcoholic beverages such as wine, beer and liquor. Fermentation of feedstocks, including sugarcane, maize, and sugar beets, produces ethanol that is added to gasoline. In some species of fish, including goldfish and carp, it provides energy when oxygen is scarce (along with lactic acid fermentation). Before fermentation, a glucose molecule breaks down into two pyruvate molecules (glycolysis). The energy from this exothermic reaction is used to bind inorganic phosphate to ADP, converting it to ATP, and to convert NAD+ to NADH. The pyruvates break down into two acetaldehyde molecules and give off two carbon dioxide molecules as waste products. The acetaldehyde is reduced into ethanol using the energy and hydrogen from NADH, and the NADH is oxidized into NAD+ so that the cycle may repeat. The reaction is catalyzed by the enzymes pyruvate decarboxylase and alcohol dehydrogenase. History of bioethanol fermentation The history of ethanol as a fuel spans several centuries and is marked by a series of significant milestones. Samuel Morey, an American inventor, was the first to produce ethanol by fermenting corn in 1826. However, it was not until the California Gold Rush in the 1850s that ethanol was first used as a fuel in the United States. Rudolf Diesel demonstrated his engine, which could run on vegetable oils and ethanol, in 1895, but the widespread use of petroleum-based diesel engines made ethanol less popular as a fuel. In the 1970s, the oil crisis reignited interest in ethanol, and Brazil became a leader in ethanol production and use. The United States began producing ethanol on a large scale in the 1980s and 1990s as a fuel additive to gasoline, due to government regulations. Today, ethanol continues to be explored as a sustainable and renewable fuel source, with researchers developing new technologies and biomass sources for its production. 1826: Samuel Morey, an American inventor, was the first to produce ethanol by fermenting corn. However, ethanol was not widely used as a fuel until many years later. (1) 1850s: Ethanol was first used as a fuel in the United States during the California Gold Rush. Miners used ethanol as a fuel for lamps and stoves because it was cheaper than whale oil. 
(2) 1895: German engineer Rudolf Diesel demonstrated his engine, which was designed to run on vegetable oils, including ethanol. However, the widespread use of diesel engines fueled by petroleum made ethanol less popular as a fuel. (3) 1970s: The oil crisis of the 1970s led to renewed interest in ethanol as a fuel. Brazil became a leader in ethanol production and use, due in part to government policies that encouraged the use of biofuels. (4) 1980s–1990s: The United States began to produce ethanol on a large scale as a fuel additive to gasoline. This was due to the passage of the Clean Air Act Amendments of 1990, which required the use of oxygenates, such as ethanol, to reduce emissions. (5) 2000s–present: There has been continued interest in ethanol as a renewable and sustainable fuel. Researchers are exploring new sources of biomass for ethanol production, such as switchgrass and algae, and developing new technologies to improve the efficiency of the fermentation process. (6) Lactic acid Homolactic fermentation (producing only lactic acid) is the simplest type of fermentation. Pyruvate from glycolysis undergoes a simple redox reaction, forming lactic acid. Overall, one molecule of glucose (or any six-carbon sugar) is converted to two molecules of lactic acid: C6H12O6 → 2 CH3CHOHCOOH It occurs in the muscles of animals when they need energy faster than the blood can supply oxygen. It also occurs in some kinds of bacteria (such as lactobacilli) and some fungi. Bacteria of this kind convert lactose into lactic acid in yogurt, giving it its sour taste. These lactic acid bacteria can carry out either homolactic fermentation, where the end product is mostly lactic acid, or heterolactic fermentation, where some lactate is further metabolized to ethanol and carbon dioxide (via the phosphoketolase pathway), acetate, or other metabolic products, e.g.: C6H12O6 → CH3CHOHCOOH + C2H5OH + CO2 If lactose is fermented (as in yogurts and cheeses), it is first converted into glucose and galactose (both six-carbon sugars with the same chemical formula): C12H22O11 + H2O → 2 C6H12O6 Heterolactic fermentation is in a sense intermediate between lactic acid fermentation and other types, e.g. alcoholic fermentation. Reasons to go further and convert lactic acid into something else include: The acidity of lactic acid impedes biological processes. This can be beneficial to the fermenting organism, as it drives out competitors that are unadapted to the acidity. As a result, the food will have a longer shelf life (one reason foods are purposely fermented in the first place); however, beyond a certain point, the acidity starts affecting the organism that produces it. The high concentration of lactic acid (the final product of fermentation) drives the equilibrium backwards (Le Chatelier's principle), decreasing the rate at which fermentation can occur and slowing down growth. Ethanol, into which lactic acid can be easily converted, is volatile and will readily escape, allowing the reaction to proceed easily. CO2 is also produced, but it is only weakly acidic and even more volatile than ethanol. Acetic acid (another conversion product) is acidic and not as volatile as ethanol; however, in the presence of limited oxygen, its creation from lactic acid releases additional energy. It is a lighter molecule than lactic acid, forming fewer hydrogen bonds with its surroundings (due to having fewer groups that can form such bonds), and is thus more volatile, which also allows the reaction to proceed more quickly. 
If propionic acid, butyric acid, and longer monocarboxylic acids are produced, the amount of acidity produced per glucose consumed will decrease, as with ethanol, allowing faster growth. Hydrogen gas Hydrogen gas is produced in many types of fermentation as a way to regenerate NAD+ from NADH. Electrons are transferred to ferredoxin, which in turn is oxidized by hydrogenase, producing H2. Hydrogen gas is a substrate for methanogens and sulfate reducers, which keep the concentration of hydrogen low and favor the production of such an energy-rich compound; nevertheless, hydrogen gas can be formed at fairly high concentrations, as in flatus. For example, Clostridium pasteurianum ferments glucose to butyrate, acetate, carbon dioxide, and hydrogen gas. The reaction leading to acetate is: C6H12O6 + 4 H2O → 2 CH3COO− + 2 HCO3− + 4 H+ + 4 H2 Glyoxylate Glyoxylate fermentation is a type of fermentation used by microbes that are able to utilize glyoxylate as a carbon source. Other Other types of fermentation include mixed acid fermentation, butanediol fermentation, butyrate fermentation, caproate fermentation, and acetone–butanol–ethanol fermentation. In the broader sense In food and industrial contexts, any chemical modification performed by a living being in a controlled container can be termed "fermentation". The following do not fall into the biochemical sense, but are called fermentation in the larger sense: Alternative protein Fermentation can be used to make alternative protein sources. It is commonly used to modify existing protein foods, including plant-based ones such as soy, into more flavorful forms such as tempeh and fermented tofu. More modern "fermentation" makes recombinant proteins to help produce meat analogues, milk substitutes, cheese analogues, and egg substitutes. Some examples are: Recombinant myoglobin for faux meat (Motif Foodworks) Recombinant leghemoglobin for faux meat (Impossible Foods) Recombinant whey protein for dairy replacement (Perfect Day) Recombinant casein protein for dairy replacements (Those Vegan Cowboys) Recombinant egg white (EVERY) Heme proteins such as myoglobin and hemoglobin give meat its characteristic texture, flavor, color, and aroma. The myoglobin and leghemoglobin ingredients can be used to replicate this property, even though they come from a vat rather than from meat. Enzymes Industrial fermentation can be used for enzyme production, where proteins with catalytic activity are produced and secreted by microorganisms. The development of fermentation processes, microbial strain engineering and recombinant gene technologies has enabled the commercialization of a wide range of enzymes. Enzymes are used in all kinds of industrial segments, such as food (lactose removal, cheese flavor), beverages (juice treatment), baking (bread softness, dough conditioning), animal feed, detergents (protein, starch and lipid stain removal), textiles, personal care, and pulp and paper. Modes of industrial operation Most industrial fermentation uses batch or fed-batch procedures, although continuous fermentation can be more economical if various challenges, particularly the difficulty of maintaining sterility, can be met. Batch In a batch process, all the ingredients are combined and the reactions proceed without any further input. Batch fermentation has been used for millennia to make bread and alcoholic beverages, and it is still a common method, especially when the process is not well understood. 
However, it can be expensive because the fermentor must be sterilized using high-pressure steam between batches. Strictly speaking, small quantities of chemicals are often added during a batch to control the pH or suppress foaming. Batch fermentation goes through a series of phases. There is a lag phase in which cells adjust to their environment; then a phase in which exponential growth occurs. Once many of the nutrients have been consumed, the growth slows and becomes non-exponential, but production of secondary metabolites (including commercially important antibiotics and enzymes) accelerates. This continues through a stationary phase after most of the nutrients have been consumed, and then the cells die (a toy model of these growth phases is sketched at the end of this article). Fed-batch Fed-batch fermentation is a variation of batch fermentation where some of the ingredients are added during the fermentation. This allows greater control over the stages of the process. In particular, production of secondary metabolites can be increased by adding a limited quantity of nutrients during the non-exponential growth phase. Fed-batch operations are often sandwiched between batch operations. Open The high cost of sterilizing the fermentor between batches can be avoided using various open fermentation approaches that are able to resist contamination. One is to use a naturally evolved mixed culture. This is particularly favored in wastewater treatment, since mixed populations can adapt to a wide variety of wastes. Thermophilic bacteria can produce lactic acid at temperatures of around 50 °C, sufficient to discourage microbial contamination; and ethanol has been produced at a temperature of 70 °C. This is just below its boiling point (78 °C), making it easy to extract. Halophilic bacteria can produce bioplastics in hypersaline conditions. Solid-state fermentation adds a small amount of water to a solid substrate; it is widely used in the food industry to produce flavors, enzymes and organic acids. Continuous In continuous fermentation, substrates are added and final products removed continuously. There are three varieties: chemostats, which hold nutrient levels constant; turbidostats, which keep cell mass constant; and plug flow reactors, in which the culture medium flows steadily through a tube while the cells are recycled from the outlet to the inlet. If the process works well, there is a steady flow of feed and effluent, and the costs of repeatedly setting up a batch are avoided. Continuous operation can also prolong the exponential growth phase and, by continuously removing them, avoid byproducts that inhibit the reactions. However, it is difficult to maintain a steady state and avoid contamination, and the design tends to be complex. Typically the fermentor must run for over 500 hours to be more economical than batch processing. History of the use of fermentation The use of fermentation, particularly for beverages, has existed since the Neolithic and has been documented dating from 7000 to 6600 BCE in Jiahu, China; 5000 BCE in India (Ayurveda mentions many medicated wines); 6000 BCE in Georgia; 3150 BCE in ancient Egypt; 3000 BCE in Babylon; 2000 BCE in pre-Hispanic Mexico; and 1500 BCE in Sudan. Fermented foods have a religious significance in Judaism and Christianity. The Baltic god Rugutis was worshiped as the agent of fermentation. In alchemy, fermentation ("putrefaction") was symbolized by Capricorn ♑︎. 
In 1837, Charles Cagniard de la Tour, Theodor Schwann and Friedrich Traugott Kützing independently published papers concluding, as a result of microscopic investigations, that yeast is a living organism that reproduces by budding. Schwann boiled grape juice to kill the yeast and found that no fermentation would occur until new yeast was added. However, many chemists, including Antoine Lavoisier, continued to view fermentation as a simple chemical reaction and rejected the notion that living organisms could be involved. This was seen as a reversion to vitalism and was lampooned in an anonymous publication by Justus von Liebig and Friedrich Wöhler. The turning point came when Louis Pasteur (1822–1895), during the 1850s and 1860s, repeated Schwann's experiments and, in a series of investigations, showed that fermentation is initiated by living organisms. In 1857, Pasteur showed that lactic acid fermentation is caused by living organisms. In 1860, he demonstrated how bacteria cause souring in milk, a process formerly thought to be merely a chemical change. His work in identifying the role of microorganisms in food spoilage led to the process of pasteurization. In 1877, working to improve the French brewing industry, Pasteur published his famous paper on fermentation, "Etudes sur la Bière", which was translated into English in 1879 as "Studies on Fermentation". He defined fermentation (incorrectly) as "life without air", yet he correctly showed how specific types of microorganisms cause specific types of fermentations and specific end products. Although showing that fermentation resulted from the action of living microorganisms was a breakthrough, it did not explain the basic nature of fermentation, nor did it prove that it is caused by the microorganisms which appear to be always present. Many scientists, including Pasteur, had unsuccessfully attempted to extract the fermentation enzyme from yeast. Success came in 1897 when the German chemist Eduard Buchner ground up yeast, extracted a juice from it, and then found to his amazement that this "dead" liquid would ferment a sugar solution, forming carbon dioxide and alcohol much like living yeasts. Buchner's results are considered to mark the birth of biochemistry. The "unorganized ferments" behaved just like the organized ones. From that time on, the term enzyme came to be applied to all ferments. It was then understood that fermentation is caused by enzymes produced by microorganisms. In 1907, Buchner won the Nobel Prize in Chemistry for his work. Advances in microbiology and fermentation technology have continued steadily up until the present. For example, in the 1930s, it was discovered that microorganisms could be mutated with physical and chemical treatments to be higher-yielding, faster-growing, tolerant of less oxygen, and able to use a more concentrated medium. Strain selection and hybridization developed as well, affecting most modern food fermentations. Post 1930s The field of fermentation has been critical to producing a wide range of consumer goods, from food and drink to industrial chemicals and pharmaceuticals. Since its early beginnings in ancient civilizations, fermentation has continued to evolve and expand, with new techniques and technologies driving advances in product quality, yield, and efficiency. 
The period from the 1930s onward saw a number of significant advances in fermentation technology, including the development of new processes for producing high-value products like antibiotics and enzymes, the increasing importance of fermentation in the production of bulk chemicals, and a growing interest in the use of fermentation for the production of functional foods and nutraceuticals. The 1950s and 1960s saw the development of new fermentation technologies, such as immobilized cells and enzymes, which allowed more precise control over fermentation processes and increased the production of high-value products like antibiotics and enzymes. In the 1970s and 1980s, fermentation became increasingly important in producing bulk chemicals like ethanol, lactic acid, and citric acid. This led to the development of new fermentation techniques and of genetically engineered microorganisms to improve yields and reduce production costs. In the 1990s and 2000s, there was growing interest in using fermentation to produce functional foods and nutraceuticals, which have potential health benefits beyond basic nutrition. This led to new fermentation processes, probiotics, and other functional ingredients. Overall, the period from 1930 onward saw significant advances in the industrial use of fermentation, leading to the production of a wide range of fermented products that are now consumed worldwide.
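As a toy illustration of the batch growth phases described in the industrial-operation section above, the Python sketch below integrates a logistic growth model. It captures the exponential and stationary phases (the lag and death phases are omitted), and all parameter values are arbitrary choices for illustration, not data from this article's sources:

def logistic_growth(x0, r, capacity, hours, dt=0.1):
    # Integrate dx/dt = r * x * (1 - x / capacity): growth is exponential
    # while biomass is small, then flattens into a stationary phase
    # as the carrying capacity (the nutrient limit) is approached.
    xs, x = [x0], x0
    for _ in range(int(hours / dt)):
        x += r * x * (1 - x / capacity) * dt
        xs.append(x)
    return xs

# 0.05 g/L inoculum, growth rate 0.4 per hour, nutrient-limited
# ceiling of 10 g/L: after 40 h the culture sits at ~10 g/L.
biomass = logistic_growth(x0=0.05, r=0.4, capacity=10.0, hours=40)
print(round(biomass[-1], 2))

In these terms, a fed-batch run corresponds to raising the capacity mid-run by feeding nutrients, while a chemostat holds the culture indefinitely in the growth phase by continuous dilution.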
Biology and health sciences
Cell processes
null
6073980
https://en.wikipedia.org/wiki/Fermentation%20in%20food%20processing
Fermentation in food processing
In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—without an oxidizing agent being used in the reaction. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy. The term "fermentation" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt. Humans carry a form of the enzyme alcohol dehydrogenase that gives them an enhanced ability to break down ethanol. Other widely consumed fermented foods include vinegar, olives, and cheese. More localized foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish. History and prehistory Brewing and winemaking Natural fermentation predates human history. Since ancient times, humans have exploited the fermentation process, which they likely first encountered unintentionally: excess food stored in containers and then forgotten was colonized by yeasts and bacteria, and over time people came to recognize and deliberately reproduce the results. The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC. Discovery of the role of yeast The French chemist Louis Pasteur founded zymology when, in 1856, he connected yeast to fermentation. When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation was catalyzed by a vital force, called "ferments", within the yeast cells. The "ferments" were thought to function only within living organisms. Pasteur wrote that "Alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." "Cell-free fermentation" Nevertheless, it was known that yeast extracts can ferment sugar even in the absence of living yeast cells. While studying this process in 1897, the German chemist and zymologist Eduard Buchner of Humboldt University of Berlin, Germany, found that sugar was fermented even when there were no living yeast cells in the mixture, by an enzyme complex secreted by yeast that he termed zymase. In 1907 he received the Nobel Prize in Chemistry for his research and discovery of "cell-free fermentation". One year earlier, in 1906, ethanol fermentation studies led to the early discovery of oxidized nicotinamide adenine dinucleotide (NAD+). Uses Food fermentation is the conversion of sugars and other carbohydrates into alcohol or preservative organic acids and carbon dioxide. All three products have found human uses. 
The production of alcohol is exploited when fruit juices are converted to wine, when grains are made into beer, and when foods rich in starch, such as potatoes, are fermented and then distilled to make spirits such as gin and vodka. The production of carbon dioxide is used to leaven bread. The production of organic acids is exploited to preserve and flavor vegetables and dairy products. Food fermentation serves five main purposes: to enrich the diet through development of a diversity of flavors, aromas, and textures in food substrates; to preserve substantial amounts of food through lactic acid, alcohol, acetic acid, and alkaline fermentations; to enrich food substrates with protein, essential amino acids, and vitamins; to eliminate antinutrients; and to reduce cooking time and the associated use of fuel. Fermented foods by region Worldwide: alcohol (beer, wine), vinegar, olives, yogurt, bread, cheese Asia East and Southeast Asia: amazake, atchara, belacan, burong mangga, com ruou, doenjang, douchi, fish sauce, lah pet, lambanog, kimchi, kombucha, leppet-so, narezushi, miso, nata de coco, nattō, ngapi, oncom, padaek, pla ra, prahok, ruou nep, sake, shrimp paste, soju, soy sauce, stinky tofu, tape, tempeh, tempoyak, zha cai Central Asia: kumis, kefir, shubat, qatiq (yogurt) South Asia: achar, appam, dosa, dhokla, dahi (yogurt), idli, mixed pickle, ngari, sinki, tongba, paneer Africa: garri, injera, laxoox, mageu, ogi, ogiri, iru Americas: chicha, chocolate, vanilla, hot sauce, tepache, tibicos, pulque, muktuk, kiviak, parakari Middle East: torshi, boza Europe: sourdough bread, elderberry wine, kombucha, pickling, rakfisk, sauerkraut, pickled cucumber, surströmming, mead, kvass, salami, sucuk, prosciutto, cultured milk products such as quark, kefir, filmjölk, crème fraîche, smetana, skyr, rakı, tupí, żur. Oceania: poi, kānga pirau Fermented foods by type Beans Cheonggukjang, doenjang, fermented bean curd, miso, natto, soy sauce, stinky tofu, tempeh, oncom, soybean paste, Beijing mung bean milk, kinama, iru, thua nao Grain Amazake, beer, bread, choujiu, gamju, injera, kvass, makgeolli, murri, ogi, rejuvelac, sake, sikhye, sourdough, sowans, rice wine, malt whisky, grain whisky, idli, dosa, Bangla (drink), vodka, boza, and chicha, among others. Vegetables Kimchi, mixed pickle, sauerkraut, Indian pickle, gundruk, tursu Fruit Wine, vinegar, cider, perry, brandy, atchara, nata de coco, burong mangga, asinan, pickling, vişinată, chocolate, rakı, aragh sagi, chacha, tempoyak Honey Mead, metheglin, tej Dairy Some kinds of cheese, as well as kefir, kumis (mare milk), shubat (camel milk), ayran, and cultured milk products such as quark, filmjölk, crème fraîche, smetana, skyr, and yogurt Fish Bagoong, faseekh, fish sauce, garum, hákarl, jeotgal, ngapi, padaek, pla ra, prahok, rakfisk, shrimp paste, surströmming, shidal Meat Chorizo, salami, sucuk, pepperoni, nem chua, som moo, saucisson, fermented sausage Tea Pu-erh tea, kombucha, lahpet, goishicha Risks Sterilization is an important factor to consider during the fermentation of foods. Failing to completely remove any microbes from equipment and storage vessels may result in the multiplication of harmful organisms within the ferment, potentially increasing the risks of foodborne illnesses such as botulism. However, botulism in vegetable ferments is possible only when they are not properly canned. Off smells and discoloration may be indications that harmful bacteria have been introduced to the food. 
Alaska has witnessed a steady increase in cases of botulism since 1985, and it has more cases of botulism than any other state in the United States of America. This is caused by the traditional Alaska Native practice of allowing animal products, such as whole fish, fish heads, walrus, sea lion, and whale flippers, beaver tails, seal oil, and birds, to ferment for an extended period of time before being consumed. The risk is exacerbated when a plastic container is used for this purpose instead of the traditional method, a grass-lined hole, as the Clostridium botulinum bacteria thrive in the anaerobic conditions created by the airtight enclosure in plastic. Research has found that fermented food contains a carcinogenic by-product, ethyl carbamate (urethane). "A 2009 review of the existing studies conducted across Asia concluded that regularly eating pickled vegetables roughly doubles a person's risk for esophageal squamous cell carcinoma."
Technology
Food, water and health
null
6074674
https://en.wikipedia.org/wiki/Biological%20engineering
Biological engineering
Biological engineering or bioengineering is the application of principles of biology and the tools of engineering to create usable, tangible, economically viable products. Biological engineering employs knowledge and expertise from a number of pure and applied sciences, such as mass and heat transfer, kinetics, biocatalysis, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable energy, ecological engineering, agricultural engineering, process engineering and catalysis, and other areas that improve the living standards of societies. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Bioengineering overlaps substantially with biotechnology and the biomedical sciences, in a way analogous to how various other forms of engineering and technology relate to various other sciences (such as aerospace engineering and other space technology to kinetics and astrophysics). Generally, biological engineers attempt to mimic biological systems to create products, or to modify and control biological systems. Working with doctors, clinicians, and researchers, bioengineers use traditional engineering principles and techniques to address biological processes, including ways to replace, augment, sustain, or predict chemical and mechanical processes. History Biological engineering is a science-based discipline founded upon the biological sciences in the same way that chemical engineering, electrical engineering, and mechanical engineering can be based upon chemistry, electricity and magnetism, and classical mechanics, respectively. Before World War II, biological engineering had begun to be recognized as a branch of engineering, but it was still a new concept to most people. After the war, the field grew more rapidly, and the term "bioengineering" was coined by the British scientist and broadcaster Heinz Wolff in 1954 at the National Institute for Medical Research. Wolff graduated that year and became director of the Division of Biological Engineering at Oxford; this was the first time bioengineering was recognized as its own branch at a university. The early focus of the discipline was electrical engineering, because of the work with medical devices and machinery during this period. When engineers and life scientists started working together, they recognized that the engineers did not know enough about the actual biology behind their work. To resolve this problem, engineers who wanted to get into biological engineering devoted more time to studying the processes of biology, psychology, and medicine. More recently, the term biological engineering has also been applied to environmental modifications such as surface soil protection, slope stabilization, watercourse and shoreline protection, windbreaks, vegetation barriers including noise barriers and visual screens, and the ecological enhancement of an area. Because other engineering disciplines also address living organisms, the term biological engineering can be applied more broadly to include agricultural engineering. The first biological engineering program in the United States was started at the University of California, San Diego, in 1966. More recent programs have been launched at MIT and Utah State University. 
Many old agricultural engineering departments in universities around the world have re-branded themselves as agricultural and biological engineering or agricultural and biosystems engineering. According to Professor Doug Lauffenburger of MIT, biological engineering has a broad base which applies engineering principles to an enormous range of system sizes and complexities, ranging from the molecular level (molecular biology, biochemistry, microbiology, pharmacology, protein chemistry, cytology, immunology, neurobiology, and neuroscience) to cellular and tissue-based systems (including devices and sensors), to whole macroscopic organisms (plants, animals), and even to biomes and ecosystems. Education The average length of study is three to five years, and the completed degree is a bachelor of engineering (B.S. in engineering). Fundamental courses include thermodynamics, biomechanics, biology, genetic engineering, fluid and mechanical dynamics, chemical and enzyme kinetics, electronics, and materials properties. Sub-disciplines Depending on the institution and the particular definitional boundaries employed, some major branches of bioengineering may be categorized as follows (note that these may overlap): Biomedical engineering: application of engineering principles and design concepts to medicine and biology for healthcare purposes. Tissue engineering Neural engineering Pharmaceutical engineering Clinical engineering Biomechanics Biochemical engineering: fermentation engineering, the application of engineering principles to microscopic biological systems that are used to create new products by synthesis, including the production of protein from suitable raw materials. Biological systems engineering: application of engineering principles and design concepts to agriculture, food sciences, and ecosystems. Bioprocess engineering: develops technology to monitor the conditions where a particular process takes place (Ex: bioprocess design, biocatalysis, bioseparation, bioenergy). Environmental health engineering: application of engineering principles to control the environment for the health, comfort, and safety of human beings. It includes the field of life-support systems for the exploration of outer space and the ocean. Human factors and ergonomics engineering: application of engineering, physiology, and psychology to the optimization of the human-machine relationship. (Ex: physical ergonomics, cognitive ergonomics, human–computer interaction) Biotechnology: the use of living systems and organisms to develop or make products. (Ex: pharmaceuticals, bioinformatics, genetic engineering) Biomimetics: the imitation of models, systems, and elements of nature to solve complex human problems. (Ex: Velcro, designed after George de Mestral noticed how easily burs stuck to a dog's hair) Bioelectrical engineering Biomechanical engineering: the application of mechanical engineering principles and biology to determine how these areas relate and how they can be integrated to potentially improve human health. Bionics: an integration of biomedical engineering focused more on robotics and assistive technologies. (Ex: prosthetics) Bioprinting: utilizing biomaterials to print organs and new tissues. Biorobotics: (Ex: electrical prosthetics) Systems biology: molecules, cells, organs, and organisms are all investigated in terms of their interactions and behaviors. Organizations Accreditation Board for Engineering and Technology (ABET), the U.S.-based accreditation board for engineering B.S. 
programs, makes a distinction between biomedical engineering and biological engineering, though there is much overlap (see above). American Institute for Medical and Biological Engineering (AIMBE) is made up of 1,500 members. Its main goal is to educate the public about the value biological engineering has in our world, as well as to invest in research and other programs to advance the field. It gives out awards to those dedicated to innovation and achievement in the field. (AIMBE does not contribute directly to biological engineering research; rather, it recognizes those who do and encourages the public to continue that forward movement.) Institute of Biological Engineering (IBE) is a non-profit organization that runs on donations alone. It aims to encourage the public to learn about, and to continue advancements in, biological engineering. (Like AIMBE, it does not perform research directly; however, it offers scholarships to students who show promise in the field.) Society for Biological Engineering (SBE) is a technological community associated with the American Institute of Chemical Engineers (AIChE). SBE hosts international conferences and is a global organization of leading engineers and scientists dedicated to advancing the integration of biology with engineering. MediUnite Journal is a medical awareness campaign and newspaper that has often published biomedical findings and cited biomedical research in various papers.
Technology
Disciplines
null
12636107
https://en.wikipedia.org/wiki/Antihistamine
Antihistamine
Antihistamines are drugs which treat allergic rhinitis and other allergies, as well as symptoms of the common cold and influenza. Typically, people take antihistamines as inexpensive, generic (not patented) drugs that can be bought without a prescription and provide relief from nasal congestion, sneezing, or hives caused by pollen, dust mites, or animal allergies, with few side effects. Antihistamines are usually for short-term treatment. Chronic allergies increase the risk of health problems which antihistamines might not treat, including asthma, sinusitis, and lower respiratory tract infection. Consultation with a medical professional is recommended for those who intend to take antihistamines for longer-term use. Although the general public typically uses the word "antihistamine" to describe drugs for treating allergies, physicians and scientists use the term to describe a class of drug that opposes the activity of histamine receptors in the body. In this sense of the word, antihistamines are subclassified according to the histamine receptor that they act upon. The two largest classes of antihistamines are H1-antihistamines and H2-antihistamines. H1-antihistamines work by binding to histamine H1 receptors in mast cells, smooth muscle, and endothelium in the body as well as in the tuberomammillary nucleus in the brain. Antihistamines that target the histamine H1-receptor are used to treat allergic reactions in the nose (e.g., itching, runny nose, and sneezing). In addition, they may be used to treat insomnia, motion sickness, or vertigo caused by problems with the inner ear. H2-antihistamines bind to histamine H2 receptors in the upper gastrointestinal tract, primarily in the stomach. Antihistamines that target the histamine H2-receptor are used to treat gastric acid conditions (e.g., peptic ulcers and acid reflux). Other antihistamines also target H3 receptors and H4 receptors. Histamine receptors exhibit constitutive activity, so antihistamines can function as either a neutral receptor antagonist or an inverse agonist at histamine receptors. Only a few currently marketed H1-antihistamines are known to function as neutral antagonists. Medical uses Histamine makes blood vessels more permeable (vascular permeability), causing fluid to escape from capillaries into tissues, which leads to the classic symptoms of an allergic reaction: a runny nose and watery eyes. Histamine also promotes angiogenesis. Antihistamines suppress the histamine-induced wheal response (swelling) and flare response (vasodilation) by blocking the binding of histamine to its receptors or reducing histamine receptor activity on nerves, vascular smooth muscle, glandular cells, endothelium, and mast cells. Antihistamines can also help correct Eustachian tube dysfunction, thereby helping correct problems such as muffled hearing, fullness in the ear, and even tinnitus. Itching, sneezing, and inflammatory responses are suppressed by antihistamines that act on H1-receptors. In 2014, antihistamines such as desloratadine were found to be effective as a complement to standardized acne treatment due to their anti-inflammatory properties and their ability to suppress sebum production. Types H1-antihistamines H1-antihistamines refer to compounds that inhibit the activity of the H1 receptor. Since the H1 receptor exhibits constitutive activity, H1-antihistamines can be either neutral receptor antagonists or inverse agonists. 
Normally, histamine binds to the H1 receptor and heightens the receptor's activity; receptor antagonists work by binding to the receptor and blocking its activation by histamine; inverse agonists, by comparison, bind to the receptor and both block the binding of histamine and reduce the receptor's constitutive activity, an effect opposite to histamine's. Most antihistamines are inverse agonists at the H1 receptor, but it was previously thought that they were antagonists. Clinically, H1-antihistamines are used to treat allergic reactions and mast cell-related disorders. Sedation is a common side effect of H1-antihistamines that readily cross the blood–brain barrier; some of these drugs, such as diphenhydramine and doxylamine, may therefore be used to treat insomnia. H1-antihistamines can also reduce inflammation, since the expression of NF-κB, the transcription factor that regulates inflammatory processes, is promoted by both the receptor's constitutive activity and agonist (i.e., histamine) binding at the H1 receptor. A combination of these effects, and in some cases metabolic ones as well, leads to most first-generation antihistamines having analgesic-sparing (potentiating) effects on opioid analgesics and, to some extent, on non-opioid ones as well. The most common antihistamines utilized for this purpose include hydroxyzine, promethazine (enzyme induction especially helps with codeine and similar prodrug opioids), phenyltoloxamine, orphenadrine, and tripelennamine; some may also have intrinsic analgesic properties of their own, orphenadrine being an example. Second-generation antihistamines cross the blood–brain barrier to a much lesser extent than the first-generation antihistamines. They minimize sedative effects due to their focused effect on peripheral histamine receptors. However, at high doses second-generation antihistamines begin to act on the central nervous system and can thus induce drowsiness when ingested in larger quantities. 
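The functional difference between a neutral antagonist and an inverse agonist described above can be made concrete with the textbook two-state receptor model, given here as an illustrative formalism from general pharmacology rather than something taken from this article's sources. The receptor equilibrates between an inactive conformation R and a constitutively active conformation R*:

\[
\mathrm{R} \;\rightleftharpoons\; \mathrm{R}^{*}, \qquad L = \frac{[\mathrm{R}^{*}]}{[\mathrm{R}]} > 0
\]

Because L > 0 even in the absence of histamine, there is baseline (constitutive) signaling. An agonist such as histamine binds R* more tightly than R and shifts the equilibrium toward R*, increasing signaling; a neutral antagonist binds both conformations with equal affinity, leaving L unchanged while denying histamine access to the receptor; an inverse agonist binds R preferentially, pulling the equilibrium toward the inactive state and reducing signaling below the drug-free baseline. In this scheme, the observation that most marketed H1-antihistamines lower constitutive H1 activity is what identifies them as inverse agonists rather than neutral antagonists.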
List of H1 antagonists/inverse agonists Acrivastine Alimemazine (a phenothiazine used as an antipruritic, antiemetic and sedative) Amitriptyline (tricyclic antidepressant) Amoxapine (tricyclic antidepressant) Aripiprazole (atypical antipsychotic, trade name: Abilify) Brexpiprazole (atypical antipsychotic, trade name: Rexulti) Azelastine Bilastine Bromodiphenhydramine (Bromazine) Brompheniramine Buclizine Carbinoxamine Cetirizine (Zyrtec) Chlophedianol (Clofedanol) Chlorodiphenhydramine Chlorpheniramine Chlorpromazine (low-potency typical antipsychotic, also used as an antiemetic) Chlorprothixene (low-potency typical antipsychotic, trade name: Truxal) Chloropyramine (first-generation antihistamine marketed in Eastern Europe) Cinnarizine (also used for motion sickness and vertigo) Clemastine Clomipramine (tricyclic antidepressant) Clozapine (atypical antipsychotic; trade name: Clozaril) Cyclizine Cyproheptadine Desloratadine Dexbrompheniramine Dexchlorpheniramine Dimenhydrinate (used as an antiemetic and for motion sickness) Dimetindene Diphenhydramine (Benadryl) Dosulepin (tricyclic antidepressant) Doxepin (tricyclic antidepressant) Doxylamine (most commonly used as an over-the-counter sedative) Ebastine Embramine Fexofenadine (Allegra/Telfast) Fluoxetine Hydroxyzine (also used as an anxiolytic and for motion sickness; trade names: Atarax, Vistaril) Imipramine (tricyclic antidepressant) Ketotifen Levocabastine (Livostin/Livocab) Levocetirizine (Xyzal) Levomepromazine (low-potency typical antipsychotic) Loratadine (Claritin) Maprotiline (tetracyclic antidepressant) Meclizine (most commonly used as an antiemetic) Mianserin (tetracyclic antidepressant) Mirtazapine (tetracyclic antidepressant, also has antiemetic and appetite-stimulating effects; trade name: Remeron) Olanzapine (atypical antipsychotic; trade name: Zyprexa) Olopatadine (used locally) Orphenadrine (a close relative of diphenhydramine used mainly as a skeletal muscle relaxant and anti-Parkinson's agent) Periciazine (low-potency typical antipsychotic) Phenindamine Pheniramine Phenyltoloxamine Promethazine (Phenergan) Pyrilamine (crosses the blood–brain barrier; produces drowsiness) Quetiapine (atypical antipsychotic; trade name: Seroquel) Rupatadine (Alergoliber) Setastine (Loderix) Setiptiline (or teciptiline, a tetracyclic antidepressant, trade name: Tecipul) Trazodone (SARI antidepressant/anxiolytic/hypnotic with mild H1 blockade action) Tripelennamine Triprolidine H2-antihistamines H2-antihistamines, like H1-antihistamines, exist as inverse agonists and neutral antagonists. They act on H2 histamine receptors found mainly in the parietal cells of the gastric mucosa, which are part of the endogenous signaling pathway for gastric acid secretion. Normally, histamine acts on H2 receptors to stimulate acid secretion; drugs that inhibit H2 signaling thus reduce the secretion of gastric acid. H2-antihistamines are among the first-line therapies used to treat gastrointestinal conditions including peptic ulcers and gastroesophageal reflux disease. Some formulations are available over the counter. Most side effects are due to cross-reactivity with unintended receptors. Cimetidine, for example, is notorious at high doses for antagonizing androgen receptors, which bind testosterone and DHT. Examples include: Cimetidine Famotidine Lafutidine Nizatidine Ranitidine Roxatidine Tiotidine H3-antihistamines An H3-antihistamine is a class of drugs used to inhibit the action of histamine at the H3 receptor. 
H3 receptors are primarily found in the brain and are inhibitory autoreceptors located on histaminergic nerve terminals, which modulate the release of histamine. Histamine release in the brain triggers secondary release of excitatory neurotransmitters such as glutamate and acetylcholine via stimulation of H1 receptors in the cerebral cortex. Consequently, unlike the H1-antihistamines which are sedating, H3-antihistamines have stimulant and cognition-modulating effects. Examples of selective H3-antihistamines include: Clobenpropit ABT-239 Ciproxifan Conessine A-349,821 Thioperamide H4-antihistamines H4-antihistamines inhibit the activity of the H4 receptor. Examples include: Thioperamide JNJ 7777120 VUF-6002 Atypical antihistamines Histidine decarboxylase inhibitors These inhibit the action of histidine decarboxylase: Tritoqualine Catechin Mast cell stabilizers Mast cell stabilizers are drugs which prevent mast cell degranulation. Examples include: Cromolyn sodium Nedocromil β-agonists History The first H1 receptor antagonists were discovered in the 1930s and were marketed in the 1940s. Piperoxan was discovered in 1933 and was the first compound with antihistamine effects to be identified. Piperoxan and its analogues were too toxic to be used in humans. Phenbenzamine (Antergan) was the first clinically useful antihistamine and was introduced for medical use in 1942. Subsequently, many other antihistamines were developed and marketed. Diphenhydramine (Benadryl) was synthesized in 1943, tripelennamine (Pyribenzamine) was patented in 1946, and promethazine (Phenergan) was synthesized in 1947 and launched in 1949. By 1950, at least 20 antihistamines had been marketed. Chlorphenamine (Piriton), a less sedating antihistamine, was synthesized in 1951, and hydroxyzine (Atarax, Vistaril), an antihistamine used specifically as a sedative and tranquilizer, was developed in 1956. The first non-sedating antihistamine, terfenadine (Seldane), was developed in 1973. Subsequently, other non-sedating antihistamines like loratadine (Claritin), cetirizine (Zyrtec), and fexofenadine (Allegra) were developed and introduced. The introduction of the first-generation antihistamines marked the beginning of medical treatment of nasal allergies. Research into these drugs led to the discovery that they were H1 receptor antagonists and also to the development of H2 receptor antagonists, where H1-antihistamines affected the nose and the H2-antihistamines affected the stomach. This history has led to contemporary research into drugs which are H3 receptor antagonists and drugs which act on the H4 receptor. Most people who use an H1 receptor antagonist to treat allergies use a second-generation drug. Society and culture The United States government removed two second-generation antihistamines, terfenadine and astemizole, from the market based on evidence that they could cause heart problems. Research Not much published research exists which compares the efficacy and safety of the various antihistamines available. The research which does exist consists mostly of short-term studies or studies which look at too few people to support general conclusions. Another gap in the research is information on the health effects for individuals with long-term allergies who take antihistamines for a long period of time. Newer antihistamines have been demonstrated to be effective in treating hives. However, there is no research comparing the relative efficacy of these drugs. 
Special populations In 2020, the UK National Health Service wrote that "[m]ost people can safely take antihistamines" but that "[s]ome antihistamines may not be suitable" for young children, the pregnant or breastfeeding, for those taking other medicines, or people with conditions "such as heart disease, liver disease, kidney disease or epilepsy". Most studies of antihistamines reported on people who are younger, so the effects on people over age 65 are not as well understood. Older people are more likely to experience drowsiness from antihistamine use than younger people. Continuous and/or cumulative use of anticholinergic medications, including first-generation antihistamines, is associated with higher risk for cognitive decline and dementia in older people. Also, most of the research has been on Caucasians, and other ethnic groups are not as well represented in the research. The evidence does not report how antihistamines affect women differently from men. Different studies have reported on antihistamine use in children, with various studies finding evidence that certain antihistamines could be used by children 2 years of age, and other drugs being safer for younger or older children. Potential uses studied Research on how commonly used medications interact with certain cancer therapies has suggested that, when taken alongside immune checkpoint inhibitors, some drugs may influence the response of subjects whose T-cell anti-tumor functions are failing. In mouse studies that examined records associated with 40 common medications, including antibiotics, antihistamines, aspirin, and hydrocortisone, subjects with melanoma and lung cancers that received fexofenadine, one of three medications (along with loratadine and cetirizine) that target histamine receptor H1 (HRH1), demonstrated significantly higher survival rates and restored T-cell anti-tumor activity, ultimately inhibiting tumor growth in the subject animals. Such results encourage further study to see whether results in humans are similar in combating resistance to immunotherapy.
Biology and health sciences
Antihistamines
Health
12638410
https://en.wikipedia.org/wiki/Muscle
Muscle
Muscle is a soft tissue, one of the four basic types of animal tissue. There are three types of muscle tissue in vertebrates: skeletal muscle, cardiac muscle, and smooth muscle. Muscle tissue gives skeletal muscles the ability to contract. Muscle tissue contains special contractile proteins called actin and myosin which interact to cause movement. Among the many other muscle proteins present are two regulatory proteins, troponin and tropomyosin. Muscle is formed during embryonic development, in a process known as myogenesis. Skeletal muscle tissue is striated, consisting of elongated, multinucleate muscle cells called muscle fibers, and is responsible for movements of the body. Other tissues in skeletal muscle include tendons and perimysium. Smooth and cardiac muscle contract involuntarily, without conscious intervention. These muscle types may be activated both through the interaction of the central nervous system as well as by innervation from peripheral plexus or endocrine (hormonal) activation. Skeletal muscle only contracts voluntarily, under the influence of the central nervous system. Reflexes are a form of non-conscious activation of skeletal muscles, but nonetheless arise through activation of the central nervous system, albeit not engaging cortical structures until after the contraction has occurred. The different muscle types vary in their response to neurotransmitters and hormones such as acetylcholine, noradrenaline, adrenaline, and nitric oxide, depending on muscle type and the exact location of the muscle. Sub-categorization of muscle tissue is also possible, depending on, among other things, the content of myoglobin, mitochondria, and myosin ATPase. Etymology The word muscle comes from Latin musculus, diminutive of mus meaning mouse, because the appearance of the flexed biceps resembles the back of a mouse. The same phenomenon occurred in Greek, in which μῦς, mȳs, means both "mouse" and "muscle". Structure There are three types of muscle tissue in vertebrates: skeletal, cardiac, and smooth. Skeletal and cardiac muscle are types of striated muscle tissue. Smooth muscle is non-striated. There are three types of muscle tissue in invertebrates, based on their pattern of striation: transversely striated, obliquely striated, and smooth muscle. In arthropods there is no smooth muscle. The transversely striated type is the most similar to the skeletal muscle in vertebrates. Vertebrate skeletal muscle tissue is an elongated, striated muscle tissue, with the fibres ranging from 3-8 micrometers in width and from 18 to 200 micrometers in breadth. In the uterine wall, during pregnancy, muscle cells enlarge in length from 70 to 500 micrometers. Skeletal striated muscle tissue is arranged in regular, parallel bundles of myofibrils, which contain many contractile units known as sarcomeres, which give the tissue its striated (striped) appearance. Skeletal muscle is voluntary muscle, anchored by tendons or sometimes by aponeuroses to bones, and is used to effect skeletal movement such as locomotion and to maintain posture. Postural control is generally maintained as an unconscious reflex, but the responsible muscles can also react to conscious control. Skeletal muscle makes up 42% of the body mass of an average adult man and 36% of an average adult woman. Cardiac muscle tissue is found only in the walls of the heart as myocardium, and it is an involuntary muscle controlled by the autonomic nervous system. 
Cardiac muscle tissue is striated like skeletal muscle, containing sarcomeres in highly regular arrangements of bundles. While skeletal muscles are arranged in regular, parallel bundles, cardiac muscle connects at branching, irregular angles known as intercalated discs. Smooth muscle tissue is non-striated and involuntary. Smooth muscle is found within the walls of organs and structures such as the esophagus, stomach, intestines, bronchi, uterus, urethra, bladder, blood vessels, and the arrector pili in the skin that control the erection of body hair. Comparison of types Skeletal muscle Skeletal muscle is broadly classified into two fiber types: type I (slow-twitch) and type II (fast-twitch). Type I, slow-twitch, slow oxidative, or red muscle is dense with capillaries and is rich in mitochondria and myoglobin, giving the muscle tissue its characteristic red color. It can carry more oxygen and sustain aerobic activity. Type II, fast-twitch muscle, has three major kinds that are, in order of increasing contractile speed: Type IIa, which, like a slow muscle, is aerobic, rich in mitochondria and capillaries and appears red when deoxygenated. Type IIx (also known as type IId), which is less dense in mitochondria and myoglobin. This is the fastest muscle type in humans. It can contract more quickly and with a greater amount of force than oxidative muscle, but can sustain only short, anaerobic bursts of activity before muscle contraction becomes painful (often incorrectly attributed to a build-up of lactic acid). N.B. in some books and articles this muscle in humans was, confusingly, called type IIB. Type IIb, which is anaerobic, glycolytic, "white" muscle that is even less dense in mitochondria and myoglobin. In small animals like rodents, this is the major fast muscle type, explaining the pale color of their flesh. In laboratory house mice, an intronic single nucleotide polymorphism in the Myosin heavy polypeptide 4 gene causes a great reduction in the amount of Type IIb muscle, yielding the "Mini-Muscle" phenotype, which was discovered based on its greatly reduced (~50%) hind-limb muscle mass. The density of mammalian skeletal muscle tissue is about 1.06 kg/liter. This can be contrasted with the density of adipose tissue (fat), which is 0.9196 kg/liter. This makes muscle tissue approximately 15% denser than fat tissue. Skeletal muscle is a highly oxygen-consuming tissue, and oxidative DNA damage that is induced by reactive oxygen species tends to accumulate with age. The oxidative DNA damage 8-OHdG accumulates in heart and skeletal muscle of both mouse and rat with age. Also, DNA double-strand breaks accumulate with age in the skeletal muscle of mice. Smooth muscle Smooth muscle is involuntary and non-striated. It is divided into two subgroups: the single-unit (unitary) and multiunit smooth muscle. Within single-unit cells, the whole bundle or sheet contracts as a syncytium (i.e. a multinucleate mass of cytoplasm that is not separated into cells). Multiunit smooth muscle tissues innervate individual cells; as such, they allow for fine control and gradual responses, much like motor unit recruitment in skeletal muscle. Smooth muscle is found within the walls of blood vessels (such smooth muscle specifically being termed vascular smooth muscle) such as in the tunica media layer of the large (aorta) and small arteries, arterioles and veins. 
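As a quick check of the muscle-versus-fat density comparison quoted in the skeletal muscle passage above, the ratio of the two figures given in the text works out as:

\[
\frac{1.06\ \text{kg/L}}{0.9196\ \text{kg/L}} \approx 1.153
\]

that is, muscle tissue is roughly 15% denser than adipose tissue, consistent with the figure stated.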
Smooth muscle is also found in lymphatic vessels, the urinary bladder, uterus (termed uterine smooth muscle), male and female reproductive tracts, the gastrointestinal tract, the respiratory tract, the arrector pili of skin, the ciliary muscle, and the iris of the eye. The structure and function are basically the same in smooth muscle cells in different organs, but the inducing stimuli differ substantially, in order to perform individual actions in the body at individual times. In addition, the glomeruli of the kidneys contain smooth muscle-like cells called mesangial cells. Cardiac muscle Cardiac muscle is involuntary, striated muscle that is found in the walls and the histological foundation of the heart, specifically the myocardium. The cardiac muscle cells (also called cardiomyocytes or myocardiocytes) predominantly contain only one nucleus, although populations with two to four nuclei do exist. The myocardium is the muscle tissue of the heart and forms a thick middle layer between the outer epicardium layer and the inner endocardium layer. Coordinated contractions of cardiac muscle cells in the heart propel blood out of the atria and ventricles to the blood vessels of the left/body/systemic and right/lungs/pulmonary circulatory systems. This complex mechanism illustrates the systole of the heart. Cardiac muscle cells, unlike most other tissues in the body, rely on an available blood and electrical supply to deliver oxygen and nutrients and to remove waste products such as carbon dioxide. The coronary arteries help fulfill this function. Development All muscles are derived from paraxial mesoderm. The paraxial mesoderm is divided along the embryo's length into somites, corresponding to the segmentation of the body (most obviously seen in the vertebral column). Each somite has three divisions: sclerotome (which forms vertebrae), dermatome (which forms skin), and myotome (which forms muscle). The myotome is divided into two sections, the epimere and hypomere, which form epaxial and hypaxial muscles, respectively. The only epaxial muscles in humans are the erector spinae and small intervertebral muscles, and they are innervated by the dorsal rami of the spinal nerves. All other muscles, including those of the limbs, are hypaxial and are innervated by the ventral rami of the spinal nerves. During development, myoblasts (muscle progenitor cells) either remain in the somite to form muscles associated with the vertebral column or migrate out into the body to form all other muscles. Myoblast migration is preceded by the formation of connective tissue frameworks, usually formed from the somatic lateral plate mesoderm. Myoblasts follow chemical signals to the appropriate locations, where they fuse into elongate skeletal muscle cells. Function The primary function of muscle tissue is contraction. The three types of muscle tissue (skeletal, cardiac and smooth) have significant differences. However, all three use the movement of actin against myosin to create contraction. Skeletal muscle In skeletal muscle, contraction is stimulated by electrical impulses transmitted by the motor nerves. Cardiac and smooth muscle contractions are stimulated by internal pacemaker cells which regularly contract, and propagate contractions to other muscle cells they are in contact with. All skeletal muscle and many smooth muscle contractions are facilitated by the neurotransmitter acetylcholine. 
Smooth muscle Smooth muscle is found in almost all organ systems: in hollow organs including the stomach and bladder; in tubular structures such as blood vessels, lymph vessels, and bile ducts; and in sphincters such as those of the uterus and the eye. In addition, it plays an important role in the ducts of exocrine glands. It fulfills various tasks such as sealing orifices (e.g. the pylorus and uterine os) or transporting chyme through wavelike contractions of the intestinal tube. Smooth muscle cells contract more slowly than skeletal muscle cells, but they are stronger, more sustained, and require less energy. Smooth muscle is also involuntary, unlike skeletal muscle, which requires a stimulus. Cardiac muscle Cardiac muscle is the muscle of the heart. It is self-contracting, autonomically regulated, and must continue to contract in a rhythmic fashion for the whole life of the organism. Hence it has special features. Invertebrate muscle There are three types of muscle tissue in invertebrates, based on their pattern of striation: transversely striated, obliquely striated, and smooth muscle. In arthropods there is no smooth muscle. The transversely striated type is the most similar to the skeletal muscle in vertebrates.
Biology and health sciences
Tissues
null
2499027
https://en.wikipedia.org/wiki/Cockroach
Cockroach
Cockroaches (or roaches) are insects belonging to the order Blattodea (Blattaria). About 30 cockroach species out of 4,600 are associated with human habitats. Some species are well-known pests. Modern cockroaches are an ancient group that first appeared during the Late Jurassic, with their ancestors, known as "roachoids", likely originating during the Carboniferous period around 320 million years ago. Those early ancestors, however, lacked the internal ovipositors of modern roaches. Cockroaches are somewhat generalized insects lacking special adaptations (such as the sucking mouthparts of aphids and other true bugs); they have chewing mouthparts and are probably among the most primitive of living Neopteran insects. They are common and hardy insects capable of tolerating a wide range of climates, from Arctic cold to tropical heat. Tropical cockroaches are often much larger than temperate species. Modern cockroaches are not considered to be a monophyletic group, as it has been found based on genetics that termites are deeply nested within the group, with some groups of cockroaches more closely related to termites than they are to other cockroaches, thus rendering Blattaria paraphyletic. Both cockroaches and termites are included in Blattodea. Some species, such as the gregarious German cockroach, have an elaborate social structure involving common shelter, social dependence, information transfer and kin recognition. Cockroaches have appeared in human culture since classical antiquity. They are popularly depicted as large, dirty pests, although the majority of species are small and inoffensive and live in a wide range of habitats around the world. Taxonomy and evolution Cockroaches are members of the superorder Dictyoptera, which includes the termites and mantids, a group of insects once thought to be separate from cockroaches. Currently, 4,600 species and over 460 genera are described worldwide. The name "cockroach" comes from the Spanish word for cockroach, cucaracha, transformed by 1620s English folk etymology into "cock" and "roach". The scientific name derives from the Latin blatta, "an insect that shuns the light", which in classical Latin was applied not only to cockroaches, but also to mantids. Historically, the name Blattaria was used largely interchangeably with the name Blattodea, but whilst Blattaria was used to refer to 'true' cockroaches exclusively, the Blattodea also includes the termites. The current catalogue of world cockroach species uses the name Blattodea for the group. Another name, Blattoptera, is also sometimes used to refer to extinct cockroach relatives. The earliest cockroach-like fossils ("blattopterans" or "roachoids") are from the Carboniferous period 320 million years ago. Fossil roachoids are considered the common ancestor of both mantises and modern cockroaches, and are distinguished from the latter by the presence of a long external ovipositor. As the body, hind wings and mouthparts are not frequently preserved in fossils, the relationship of these roachoids and modern cockroaches remains disputed. The earliest definitive fossils of modern crown group cockroaches, specifically Corydiidae, are known from the Late Jurassic rocks of Russia. The evolutionary relationships of the Blattodea (cockroaches and termites) shown in the cladogram are based on Inward, Beccaloni and Eggleton (2007). The cockroach families Anaplectidae, Lamproblattidae, and Tryonicidae are not shown but are placed within the superfamily Blattoidea. 
The cockroach families Corydiidae and Ectobiidae were previously known as the Polyphagidae and Blattellidae. Termites were previously regarded as a separate order, Isoptera, from the cockroaches. However, recent genetic evidence strongly suggests that they evolved directly from 'true' cockroaches, and many authors now place them as an "epifamily" of Blattodea. This evidence supported a hypothesis suggested in 1934 that termites are closely related to the wood-eating cockroaches (genus Cryptocercus). This hypothesis was originally based on the similarity of the symbiotic gut flagellates in termites regarded as living fossils to those in wood-eating cockroaches. Additional evidence emerged when F. A. McKittrick (1965) noted similar morphological characteristics between some termites and cockroach nymphs. The similarities among these cockroaches and termites have led some scientists to reclassify termites as a single family, the Termitidae, within the order Blattodea. Other scientists have taken a more conservative approach, proposing to retain the termites as the Termitoidae, an epifamily within the order. Such a measure preserves the classification of termites at family level and below. Description Most species of cockroach are about the size of a thumbnail, but several species are notably larger. The world's heaviest cockroach is the Australian giant burrowing cockroach Macropanesthia rhinoceros, which can reach in length and weigh up to . Comparable in size is the Central American giant cockroach Blaberus giganteus. The longest cockroach species is Megaloblatta longipennis, which can reach in length and across. A Central and South American species, Megaloblatta blaberoides, has the largest wingspan of up to . At the other end of the size scale, Attaphila cockroaches that live with leaf-cutter ants include some of the world's smallest species, growing to about 3.5 mm in length. Cockroaches are generalized insects with few special adaptations, and may be among the most primitive living Neopteran insects. They have a relatively small head and a broad, flattened body, and most species are reddish-brown to dark brown. They have large compound eyes, two ocelli, and long, flexible antennae. The mouthparts are on the underside of the head and include generalized chewing mandibles, salivary glands and various touch and taste receptors. The body is divided into a thorax of three segments and a ten-segmented abdomen. The external surface has a tough exoskeleton which contains calcium carbonate; this protects the inner organs and provides attachment to muscles. This external exoskeleton is coated with wax to repel water. The wings are attached to the second and third thoracic segments. The tegmina, or first pair of wings, are tough and protective; these lie as a shield on top of the membranous hind wings, which are used in flight. All four wings have branching longitudinal veins, as well as multiple cross-veins. The three pairs of legs are sturdy, with large coxae and five claws each. They are attached to each of the three thoracic segments. Of these, the front legs are the shortest and the hind legs the longest, providing the main propulsive power when the insect runs. The spines on the legs were earlier considered to be sensory, but observations of the insect's gait on sand and wire meshes have demonstrated that they help in locomotion on difficult terrain. The structures have been used as inspiration for robotic legs. The abdomen has ten segments, each having a pair of spiracles for respiration. 
In addition to the spiracles, the final segment bears a pair of cerci, a pair of anal styles, the anus and the external genitalia. Males have an aedeagus through which they secrete sperm during copulation, while females have spermatheca for storing sperm and an ovipositor through which the oothecae are laid. Distribution and habitat Cockroaches are abundant throughout the world and live in a wide range of environments, especially in the tropics and subtropics. Cockroaches can withstand extremely low temperatures, allowing them to live in the Arctic. Some species are capable of surviving temperatures of by manufacturing an antifreeze made out of glycerol. In North America, 50 species separated into five families are found throughout the continent. 450 species are found in Australia. Only about four widespread species are commonly regarded as pests. Cockroaches occupy a wide range of habitats. Many live in leaf litter, among the stems of matted vegetation, in rotting wood, in holes in stumps, in cavities under bark, under log piles and among debris. Some live in arid regions and have developed mechanisms to survive without access to water sources. Others are aquatic, living near the surface of water bodies, including bromeliad phytotelmata, and diving to forage for food. Most of these respire by piercing the water surface with the tip of the abdomen, which acts as a snorkel, but some carry a bubble of air under their thoracic shield when they submerge. By doing this, cockroaches can remain submerged for up to 40 minutes. Others live in the forest canopy where they may be one of the main types of invertebrate present. Here they may hide during the day in crevices, among dead leaves, in bird and insect nests or among epiphytes, emerging at night to feed. Behavior Cockroaches are social insects; a large number of species are either gregarious or inclined to aggregate, and a slightly smaller number exhibit parental care. It used to be thought that cockroaches aggregated because they were reacting to environmental cues, but it is now believed that pheromones are involved in these behaviors. Some species secrete these in their feces with gut microbial symbionts being involved, while others use glands located on their mandibles. Pheromones produced by the cuticle may enable cockroaches to distinguish between different populations of cockroach by odor. The behaviors involved have been studied in only a few species, but German cockroaches leave fecal trails with an odor gradient. Other cockroaches follow such trails to discover sources of food and water, and where other cockroaches are hiding. Thus, cockroaches have emergent behavior, in which group or swarm behavior emerges from a simple set of individual interactions. Daily rhythms may also be regulated by a complex set of hormonal controls, of which only a small subset is understood. In 2005, one of these proteins, pigment dispersing factor (PDF), was isolated and found to be a key mediator in the circadian rhythms of the cockroach. Pest species adapt readily to a variety of environments, but prefer warm conditions found within buildings. Many tropical species prefer even warmer environments. Cockroaches are mainly nocturnal and run away when exposed to light. An exception to this is the Asian cockroach, which flies mostly at night but is attracted to brightly lit surfaces and pale colors. Collective decision-making Gregarious cockroaches display collective decision-making when choosing food sources. 
When a sufficient number of individuals (a "quorum") exploits a food source, this signals to newcomer cockroaches that they should stay there longer rather than leave for elsewhere. Other mathematical models have been developed to explain aggregation dynamics and conspecific recognition. Cooperation and competition are balanced in cockroach group decision-making behavior. Cockroaches appear to use just two pieces of information to decide where to go, namely how dark it is and how many other cockroaches there are. A study used specially scented roach-sized robots that seemed real to the roaches to demonstrate that once there are enough insects in a place to form a critical mass, the roaches accepted the collective decision on where to hide, even if this was an unusually lit place. Social behavior When reared in isolation, German cockroaches show behavior that is different from behavior when reared in a group. In one study, isolated cockroaches were less likely to leave their shelters and explore, spent less time eating, interacted less with conspecifics when exposed to them, and, among males, took longer to recognize receptive females. Because these changes occurred in many contexts, the authors suggested them as constituting a behavioral syndrome. These effects might have been due either to reduced metabolic and developmental rates in isolated individuals or the fact that the isolated individuals had not had a training period to learn about what others were like via their antennae. Individual American cockroaches appear to have consistently different "personalities" regarding how they seek shelter. In addition, group personality is not simply the sum of individual choices, but reflects conformity and collective decision-making. The gregarious German and American cockroaches have elaborate social structure, chemical signaling, and "social herd" characteristics, as Lihoreau and his fellow researchers have described. There is evidence that a few species of group-living roaches in the genera Melyroidea and Aclavoidea may exhibit a reproductive division of labor, which, if confirmed, would make these the only genuinely eusocial lineage known among roaches, in contrast to the subsocial members of the genus Cryptocercus. Sounds Some species make a buzzing noise while other cockroaches make a chirping noise. Gromphadorhina species and Archiblatta hoeveni produce sound through the modified spiracles on the fourth abdominal segment. In the former species, several different hisses are produced, including disturbance sounds, produced by adults and larger nymphs; and aggressive, courtship and copulatory sounds produced by adult males. Henschoutedenia epilamproides has a stridulatory organ between its thorax and abdomen, but the purpose of the sound produced is unclear. Several Australian species practice acoustic and vibration behaviour as an aspect of courtship. They have been observed producing hisses and whistles from air forced through the spiracles. Furthermore, in the presence of a potential mate, some cockroaches tap the substrate in a rhythmic, repetitive manner. Acoustic signals may be of greater prevalence amongst perching species, particularly those that live on low vegetation in Australia's tropics. Biology Digestive tract Cockroaches are generally omnivorous; the American cockroach (Periplaneta americana), for example, feeds on a great variety of foodstuffs including bread, fruit, leather, starch in book bindings, paper, glue, skin flakes, hair, dead insects and soiled clothing. 
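The quorum-based shelter and food-source choice described in the collective decision-making passage above lends itself to a simple agent-based sketch. The following Python toy model is illustrative only: the stay/leave rule, the parameter values, and the two-shelter setup are assumptions made for demonstration, not the published models from the cockroach literature. Each simulated roach uses only the two inputs mentioned above, shelter darkness and current occupancy.

import random

# Illustrative parameters (assumed, not measured values).
N_ROACHES = 50
STEPS = 200
DARKNESS = [0.9, 0.9]      # two equally dark shelters
QUORUM_GAIN = 0.25         # how strongly occupancy suppresses leaving

def leave_probability(occupancy, darkness):
    """Probability a sheltered roach leaves: lower in darker, fuller shelters."""
    base = 1.0 - darkness                  # bright shelters are unattractive
    crowd_bonus = QUORUM_GAIN * occupancy  # each neighbour makes staying likelier
    return max(0.01, base / (1.0 + crowd_bonus))

def simulate(seed=0):
    rng = random.Random(seed)
    # position: 0 or 1 for a shelter, None for wandering outside
    position = [None] * N_ROACHES
    for _ in range(STEPS):
        counts = [position.count(0), position.count(1)]
        for i in range(N_ROACHES):
            if position[i] is None:
                position[i] = rng.choice([0, 1])  # wanderers enter a random shelter
            elif rng.random() < leave_probability(counts[position[i]],
                                                  DARKNESS[position[i]]):
                position[i] = None                # leave and wander again
    return position.count(0), position.count(1)

if __name__ == "__main__":
    for seed in range(3):
        print(simulate(seed))

Even with two identical shelters, runs of this toy model tend to break symmetry: an early random imbalance lowers the leave probability in the fuller shelter, so the majority grows, echoing the critical-mass behavior observed in the robot experiments described above.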
Many species of cockroach harbor in their gut symbiotic protozoans and bacteria which are able to digest cellulose. In many species, these symbionts may be essential if the insect is to utilize cellulose; however, some species secrete cellulase in their saliva, and the wood-eating cockroach, Panesthia cribrata, is able to survive indefinitely on a diet of crystallized cellulose while being free of microorganisms. The similarity of these symbionts in the genus Cryptocercus to those in termites is such that these cockroaches have been suggested to be more closely related to termites than to other cockroaches, and current research strongly supports this hypothesis about their relationships. All species studied so far carry the obligate mutualistic endosymbiont bacterium Blattabacterium, with the exception of Nocticola, an Australian cave-dwelling genus without eyes, pigment or wings, which recent genetic studies indicate is a very primitive cockroach. It had previously been thought that all five families of cockroach were descended from a common ancestor that was infected with B. cuenoti. It may be that N. australiensis subsequently lost its symbionts, or alternatively this hypothesis will need to be re-examined. Tracheae and breathing Like other insects, cockroaches breathe through a system of tubes called tracheae which are attached to openings called spiracles on all body segments. When the carbon dioxide level in the insect rises high enough, valves on the spiracles open and carbon dioxide diffuses out and oxygen diffuses in. The tracheal system branches repeatedly, the finest tracheoles bringing air directly to each cell, allowing gaseous exchange to take place. Cockroaches do not have lungs as vertebrates do, and can continue to respire if their heads are removed. In some very large species, the body musculature may contract rhythmically to forcibly move air in and out of the spiracles; this may be considered a form of breathing. Reproduction Cockroaches use pheromones to attract mates, and the males practice courtship rituals, such as posturing and stridulation. Like many insects, cockroaches mate facing away from each other with their genitalia in contact, and copulation can be prolonged. A few species are known to be parthenogenetic, reproducing without the need for males. Female cockroaches are sometimes seen carrying egg cases on the end of their abdomens; the German cockroach holds about 30 to 40 long, thin eggs in a case called an ootheca. She drops the capsule prior to hatching, though live births do occur in rare instances. The egg capsule may take more than five hours to lay and is initially bright white in color. The eggs hatch through the combined pressure of the hatchlings gulping air. The hatchlings are initially bright white nymphs and continue inflating themselves with air, becoming harder and darker within about four hours. Their transient white stage while hatching and later while molting has led to claims of albino cockroaches. Development from eggs to adults takes three to four months. Cockroaches live up to a year, and the female may produce up to eight egg cases in a lifetime; in favorable conditions, she can produce 300 to 400 offspring. Other species of cockroaches, however, can produce far more eggs; in some cases a female needs to be impregnated only once to be able to lay eggs for the rest of her life. 
The female usually attaches the egg case to a substrate, inserts it into a suitably protective crevice, or carries it about until just before the eggs hatch. Some species, however, are ovoviviparous, keeping the eggs inside their body, with or without an egg case, until they hatch. At least one genus, Diploptera, is fully viviparous. Cockroaches have incomplete metamorphosis, meaning that the nymphs are generally similar to the adults, except for undeveloped wings and genitalia. Development is generally slow, and may take a few months to over a year. The adults are also long-lived; some have survived for as many as four years in the laboratory. Parthenogenesis When female American cockroaches (Periplaneta americana) are housed in groups, this close association promotes parthenogenic reproduction. Oothecae, a type of egg mass, are produced asexually. The parthenogenetic process by which eggs are produced in P. americana is automixis. During automixis, meiosis occurs, but instead of giving rise to haploid gametes as ordinarily occurs, diploid gametes are produced (probably by terminal fusion) that can then develop into female cockroaches. Hardiness Cockroaches are among the hardiest insects. Some species are capable of remaining active for a month without food and are able to survive on limited resources, such as the glue from the back of postage stamps. Some can go without air for 45 minutes. Japanese cockroach (Periplaneta japonica) nymphs, which hibernate in cold winters, have survived twelve hours at in laboratory experiments. Experiments on decapitated specimens of several species of cockroach found that a variety of behavioral functionality remained, including shock avoidance and escape behavior, although many insects other than cockroaches are also able to survive decapitation, and popular claims of the longevity of headless cockroaches do not appear to be based on published research. The severed head is able to survive and wave its antennae for several hours, or longer when refrigerated and given nutrients. It is popularly suggested that cockroaches will "inherit the earth" if humanity destroys itself in a nuclear war. While cockroaches do, indeed, have a much higher radiation resistance than vertebrates, with a lethal dose perhaps six to 15 times that for humans, they are not exceptionally radiation-resistant compared to other insects, such as the fruit fly. The cockroach's ability to withstand radiation has been explained through the cell cycle. Cells are most vulnerable to the effects of radiation while they are dividing. A cockroach's cells divide only once each molting cycle (which occurs weekly in the juvenile German cockroach). Since not all cockroaches would be molting at the same time, many would be unaffected by an acute burst of radiation, although lingering and more acute radiation would still be harmful. Relationship with humans In research and education Because of their ease of rearing and resilience, cockroaches have been used as insect models in the laboratory, particularly in the fields of neurobiology, reproductive physiology and social behavior. The cockroach is a convenient insect to study as it is large and simple to raise in a laboratory environment. This makes it suitable both for research and for school and undergraduate biology studies. It can be used in experiments on topics such as learning, sexual pheromones, spatial orientation, aggression, activity rhythms and the biological clock, and behavioral ecology. 
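The cell-cycle argument in the Hardiness passage above can be made concrete with a rough back-of-the-envelope estimate; the division window used here is an illustrative assumption, not a measured figure. If cell division in a juvenile German cockroach occupied, say, an 8-hour window once per weekly (168-hour) molting cycle, then at the instant of an acute radiation burst only about

\[
\frac{8\ \text{h}}{168\ \text{h}} \approx 5\%
\]

of individuals would have cells in the vulnerable dividing state, so most of a population would escape the worst effects of a brief exposure, whereas lingering radiation would eventually catch every individual during a molt.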
Research conducted in 2014 suggests that humans fear cockroaches the most, even more than mosquitoes, due to an evolutionary aversion. As pests The Blattodea include some thirty species of cockroaches associated with humans; these species are atypical of the thousands of species in the order. Of those thirty species, four are most commonly encountered as pests: the German cockroach (Blattella germanica), American cockroach (Periplaneta americana), oriental cockroach (Blatta orientalis), and brown-banded cockroach (Supella longipalpa). Pest cockroaches feed on human and pet food and can leave an offensive odor. They can passively transport pathogenic microbes on their body surfaces, particularly in environments such as hospitals. Cockroaches are linked with allergic reactions in humans. One of the proteins that trigger allergic reactions is tropomyosin, which can cause cross-reactive allergy to dust mites and shrimp. These allergens are also linked with asthma. Some species of cockroach can live for up to a month without food, so the absence of visible cockroaches in a home does not mean they are not there. Approximately 20–48% of homes with no visible sign of cockroaches have detectable cockroach allergens in dust. Control Many remedies have been tried in the search for control of the major pest species of cockroaches, which are resilient and fast-breeding. Household chemicals like sodium bicarbonate (baking soda) have been suggested, without evidence for their effectiveness. Garden herbs including bay, catnip, mint, cucumber, and garlic have been proposed as repellents. Poisoned bait containing hydramethylnon or fipronil, and boric acid powder, is effective on adults. Baits with egg killers are also quite effective at reducing the cockroach population. Alternatively, insecticides containing deltamethrin or pyrethrin are very effective. In Singapore and Malaysia, taxi drivers use pandan leaves to repel cockroaches in their vehicles. Natural methods of cockroach control have been advanced by several published studies, especially those using the fungus Metarhizium robertsii (syn. M. anisopliae). Some parasites and predators are effective for biological control of cockroaches. Parasitoidal wasps such as Ampulex wasps sting nerve ganglia in the cockroach's thorax, causing temporary paralysis and allowing the wasp to deliver an incapacitating sting into the cockroach's brain. The wasp clips the antennae with its mandibles and drinks some hemolymph before dragging the prey to a burrow, where an egg (rarely two) is laid on it. The wasp larva feeds on the subdued living cockroach. Another wasp considered to be a promising candidate for biological control is the ensign wasp Evania appendigaster, which attacks cockroach oothecae to lay a single egg inside. Ongoing research is still developing technologies allowing for the mass-rearing of these wasps for application releases. Widow spiders commonly prey on cockroaches. Cockroaches can be trapped in a deep, smooth-walled jar baited with food inside, placed so that cockroaches can reach the opening, for example with a ramp of card or twigs on the outside. An inch or so of water or stale beer (by itself a cockroach attractant) in the jar can be used to drown any insects thus captured. The method works well with the American cockroach, but less so with the German cockroach. A study conducted by scientists at Purdue University concluded that the most common cockroaches in the US, Australia and Europe were able to develop a "cross resistance" to multiple types of pesticide. 
This contradicted the previous understanding that the animals can develop resistance to only one pesticide at a time. The scientists suggested that cockroaches will no longer be easily controlled using a diverse spectrum of chemical pesticides and that a mix of other means, such as traps and better sanitation, will need to be employed. Researchers from Heriot-Watt University demonstrated that a powerful laser can neutralise cockroaches in a home with high effectiveness, and suggested it might be an alternative to pesticides. As food Although considered disgusting in Western culture, cockroaches are eaten in many places around the world. Whereas household pest cockroaches may carry bacteria and viruses, cockroaches bred under laboratory conditions can be used to prepare nutritious food. In Thailand and Mexico, the heads and legs are removed, and the remainder may be boiled, sautéed, grilled, dried, or diced. Frying makes the insect crispy with soft innards that taste like cottage cheese. Recipes from Taiwan also call for its use in omelets. It can be a feeder insect for pet reptiles. Medicinal use Cockroaches are raised in large quantities in China for the production of traditional medicine and cosmetics. There are about 100 cockroach farms in the country. Running a farm involves relatively low starting and operating costs due to how hardy and easy to process the insects are. Chinese and South Korean researchers are investigating cockroaches for treating baldness, AIDS, cancer, and as a dietary supplement. Other uses Recent experiments have shown that some species of cockroaches may be used as plastic scavengers. Conservation While a small minority of cockroaches are associated with human habitats and viewed as repugnant by many people, a few species are of conservation concern. The Lord Howe Island wood-feeding cockroach (Panesthia lata) is listed as endangered by the New South Wales Scientific Committee, but the cockroach may be extinct on Lord Howe Island itself. The introduction of rats, the spread of Rhodes grass (Chloris gayana) and fires are possible reasons for their scarcity. Two species are currently listed as endangered and critically endangered by the IUCN Red List, Delosia ornata and Nocticola gerlachi. Both cockroaches have a restricted distribution and are threatened by habitat loss and rising sea levels. Only 600 Delosia ornata adults and 300 nymphs are known to exist, and these are threatened by a hotel development. No action has been taken to save the two cockroach species, but protecting their natural habitats may prevent their extinction. In the former Soviet Union, cockroach populations have been declining at an alarming rate; this may be exaggerated, or the phenomenon may be temporary or cyclic. One species of roach, Simandoa conserfariam, is considered extinct in the wild. Cultural depictions Cockroaches were known and considered repellent but medicinally useful in Classical times. An insect named in Greek "σίλφη" (silphe) has been identified with the cockroach, though the scientific name Silpha refers to a genus of carrion beetles. Aristotle mentions it, saying that it sheds its skin; it is described as foul-smelling in Aristophanes' play Peace; Euenus called it a pest of book collections, being "page-eating, destructive, black-bodied" in his Analect. Virgil named the cockroach "Lucifuga" ("one that avoids light"). 
Pliny the Elder recorded the use of "Blatta" in various medicines; he describes the insect as disgusting, and as seeking out dark corners to avoid the light. Dioscorides recorded the use of the "Silphe", ground up with oil, as a remedy for earache. Lafcadio Hearn (1850–1904) asserted that "For tetanus cockroach tea is given. I do not know how many cockroaches go to make up the cup; but I find that faith in this remedy is strong among many of the American population of New Orleans. A poultice of boiled cockroaches is placed over the wound." He adds that cockroaches are eaten, fried with garlic, for indigestion. Several cockroach species, such as Blaptica dubia, are raised as food for insectivorous pets. A few cockroach species are raised as pets, most commonly the giant Madagascar hissing cockroach, Gromphadorhina portentosa. While the hissing cockroaches may be the most commonly kept species, many other species are kept by cockroach enthusiasts; there is even a specialist society, the Blattodea Culture Group (BCG), which was a thriving organisation for about 15 years but now appears to be dormant. The BCG provided a source of literature for people interested in rearing cockroaches, which was otherwise limited to either scientific papers, general insect books, or books covering a variety of exotic pets; in the absence of an inclusive book, one member published Introduction to Rearing Cockroaches, which still appears to be the only book dedicated to rearing cockroaches. Cockroaches have been used for space tests. A cockroach given the name Nadezhda was sent into space by Russian scientists as part of a Foton-M mission, during which she mated and produced 33 offspring after returning to Earth. Because of their long association with humans, cockroaches are frequently referred to in popular culture. In Western culture, cockroaches are often depicted as dirty pests. In a journal kept from 1750 to 1752, Pehr Osbeck noted that cockroaches were frequently seen and found their way into bakeries after the sailing ship Gothenburg ran aground and was destroyed on rocks. Donald Harington's satirical novel The Cockroaches of Stay More (Harcourt, 1989) imagines a community of "roosterroaches" in a mythical Ozark town where the insects are named after their human counterparts. Madonna famously said, "I am a survivor. I am like a cockroach, you just can't get rid of me". An urban legend maintains that cockroaches are radiation-resistant, and thus would survive a nuclear war.
Beech marten
The beech marten (Martes foina), also known as the stone marten, house marten or white breasted marten, is a species of marten native to much of Europe and Central Asia, though it has established a feral population in North America. It is listed as Least Concern on the IUCN Red List on account of its wide distribution, its large population, and its presence in a number of protected areas. It is superficially similar to the European pine marten, but differs from it by its smaller size and habitat preferences. While the pine marten is a forest specialist, the beech marten is a more generalist and adaptable species, occurring in a number of open and forest habitats. Evolution Its most likely ancestor is Martes vetus, which also gave rise to the pine marten. The earliest M. vetus fossils were found in deposits dated to the Würm glaciation in Lebanon and Israel. The beech marten likely originated in the Near East or southwestern Asia, and may have arrived in Europe by the Late Pleistocene or the early Holocene. Thus, the beech marten differs from most other European mustelids of the Quaternary, as all other species (save for the European mink) appeared during the Middle Pleistocene. Comparisons between fossil animals and their descendants indicate that the beech marten underwent a decrease in size beginning in the Würm period. Beech martens indigenous to the Aegean Islands represent a relic population with primitive Asiatic affinities. The skull of the beech marten suggests a greater adaptation toward hypercarnivory than that of the pine marten, as indicated by its smaller head, shorter snout, narrower post-orbital constriction and lesser emphasis on the cheek teeth. Selective pressures must have acted to increase the beech marten's bite force at the expense of gape. These traits probably acted on male beech martens as a mechanism to avoid both intraspecific competition with females and interspecific competition with the ecologically overlapping pine marten. Subspecies Eleven subspecies are recognised. Description The beech marten is superficially similar to the pine marten, but has a somewhat longer tail, a more elongated and angular head, and shorter, more rounded and widely spaced ears. Its nose is also of a light peach or grey colour, whereas that of the pine marten is dark black or greyish-black. Its feet are not as densely furred as those of the pine marten, thus making them look less broad, with the paw pads remaining visible even in winter. Because of its shorter limbs, the beech marten's manner of locomotion differs from that of the pine marten; the beech marten moves by creeping in a polecat-like manner, whereas the pine marten and sable move by bounds. The load per 1 cm² of the supporting surface of the beech marten's foot (30.9 g) is double that of the pine marten (15.2 g), thus it is obliged to avoid snowy regions. Its skull is similar to that of the pine marten, but differs in its shorter facial region, more convex profile, its larger carnassials and smaller molars. The beech marten's penis is larger than the pine marten's, with the bacula of young beech martens often outsizing those of old pine martens. Males measure 430–590 mm in body length, while females measure 380–470 mm. The tail measures 250–320 mm in males and 230–275 mm in females. Males weigh 1.7–1.8 kg in winter and 2–2.1 kg in summer, while females weigh 1.1–1.3 kg in winter and 1.4–1.5 kg in summer. The beech marten's fur is coarser than the pine marten's, with elastic guard hairs and less dense underfur. 
Its summer coat is short, sparse and coarse, and the tail is sparsely furred. The colour tone is lighter than the pine marten's. Unlike the pine marten, its underfur is whitish, rather than greyish. The tail is dark-brown, while the back is darker than that of the pine marten. The throat patch of the beech marten is always white. The patch is large and generally has two projections extending backwards to the base of the forelegs and upward on the legs. The dark colour of the belly juts out between the forelegs as a line into the white colour of the chest and sometimes into the neck. In the pine marten, by contrast, the white colour between the forelegs juts backwards as a protrusion into the belly colour. Behaviour and ecology The beech marten is mainly a crepuscular and nocturnal animal, though to a much lesser extent than the European polecat. It is especially active during moonlit nights. Being a more terrestrial animal than the pine marten, the beech marten is less arboreal in its habits, though it can be a skilled climber in heavily forested areas. It is a skilled swimmer, and may occasionally be active during daytime hours, particularly in the summer, when nights are short. It typically hunts on the ground. During heavy snowfalls, the beech marten moves along paths made by hares or by skis. Social and territorial behaviours In an area of northeastern Spain, where the beech marten still lives in relatively unmodified habitats, one specimen was recorded as having a home range with two centres of activity. Its period of maximum activity occurred between 6 PM and midnight. Between 9 AM and 6 PM, the animal was found to be largely inactive. In urban areas, the beech marten's dens are almost entirely in buildings, particularly during winter. The beech marten does not dig burrows, nor does it occupy those of other animals. Instead, it nests in naturally occurring fissures and clefts in rocks, spaces between stones in rock slides and inhabited or uninhabited stone structures. It may live in tree holes at a height of up to 9 metres. Reproduction and development Estrus and copulation occur at the same time as in the pine marten. Copulation can last longer than one hour. Mating occurs in the June–July period, and takes place in the morning or on moonlit nights, on the ground or on the roofs of houses. The gestation period is the same length as the pine marten's: 236–237 days in the wild, and 254–275 days on fur farms. Parturition takes place in late March–early April, with the average litter consisting of 3–7 kits. The kits are born blind, and begin to see at the age of 30–36 days. The lactation period lasts 40–45 days. By early July, the young are indistinguishable from the adults. Diet The beech marten's diet includes a much higher quantity of plant food than that of the pine marten and sable. Plant foods eaten by the beech marten include cherries, apples, pears, plums, black nightshade, tomatoes, grapes, raspberries and mountain ash. Plant food typically predominates during the winter months. Rats, mice and chickens are also eaten. Among bird species preyed upon by the beech marten, sparrow-like birds predominate, though snowcocks and partridges may also be taken. The marten likes to plunder the nests of birds including passerines, galliformes and small owls, preferring to kill the parents in addition to the fledglings. Although it rarely attacks poultry, some specimens may become specialized poultry raiders, even when wild prey is abundant. 
Males tend to target large, live prey more than females, which feed on small prey and carrion with greater frequency. Relationships with other predators In areas where the beech marten is sympatric with the pine marten, the two species avoid competing with one another by assuming different ecological niches: the pine marten feeds on birds and rodents more frequently, while the beech marten feeds on fruits and insects. However, in one known case, a subadult beech marten was killed by a pine marten. The beech marten has been known to kill European polecats on rare occasions. Red foxes, lynxes, mountain lions, golden eagles, and Eurasian eagle-owls may prey on adults, and juveniles are vulnerable to attack by birds of prey and wildcats. There is, however, one case from Germany of a beech marten killing a domestic cat. Range The beech marten is a widespread species which occurs throughout much of Europe and Central Asia. It occurs from Spain and Portugal in the west, through Central and Southern Europe, the Middle East and Central Asia, extending as far east as the Altai and Tien Shan mountains and northwest China. Within Europe, the species is absent from the British Isles, the Scandinavian Peninsula, Finland, the northern Baltic and northern European Russia. It occurs in Afghanistan, Pakistan, India, Nepal and Bhutan, and was recently confirmed to inhabit northern Burma. Introduction in North America The beech marten is present in Wisconsin, particularly near the urban centres surrounding Milwaukee. It is also present in several wooded, upland areas in the Kettle Moraine State Forest, and in nearby woodlands of Walworth, Racine, Waukesha and probably Jefferson Counties. North American beech martens are likely descended from feral animals that escaped a private fur farm in Burlington during the 1940s; other accounts list them as having been released or having escaped in 1972. Relationships with humans Tameability British zoologist George Rolleston theorised that the "domestic cat" of the Ancient Greeks and Romans was in fact the beech marten. Pioneering marine biologist Jeanne Villepreux-Power kept two tame beech martens. Hunting and fur use Although the beech marten is a valuable animal to the fur trade, its pelt is inferior in quality to that of the pine marten and sable. Beech marten skins on the fur markets of the Soviet Union amounted to only 10–12% of the number of pine marten skins. Beech martens were caught only in the Caucasus, in the montane part of Crimea and (in very small numbers) in the rest of Ukraine, and in the republics of Middle Asia. Because animals with more valuable pelts are rare in those areas, the beech marten is of value to hunters on the local market. Beech martens are captured with jaw traps or, for live capture, with cage traps. The shooting of beech martens is inefficient, and trailing them with dogs is only successful when the animal can be trapped in a tree hollow. Car damage Since the mid-1970s, the beech marten has been known to occasionally cause damage to cars. Cars attacked by martens typically have cut tubes and cables. The reason for this behaviour is not fully known, as the damaged items are not eaten. There is, however, a seasonal peak in marten attacks on cars in spring, when young martens explore their surroundings more often and have yet to learn which items in their habitat are edible or not. 
Large Hadron Collider On 29 April and 21 November 2016, two beech martens shut down the Large Hadron Collider, the world's most powerful particle accelerator, by climbing on 18–66 kV electrical transformers located above ground near the LHCb and ALICE experiments, respectively. The second marten was stuffed and put on display in the Rotterdam Natural History Museum.
Polypropylene glycol
Polypropylene glycol or polypropylene oxide is the polymer (or macromolecule) of propylene glycol. Chemically it is a polyether and, more generally speaking, a polyalkylene glycol (PAG; HS Code 3907.2000). The term polypropylene glycol or PPG is reserved for polymers of low to medium molar mass, when the nature of the end-group, which is usually a hydroxyl group, still matters. The term "oxide" is used for high-molar-mass polymers, when end-groups no longer affect polymer properties. Between 60 and 70% of propylene oxide is converted to polyether polyols by the process called alkoxylation. Polymerization Polypropylene glycol is produced by ring-opening polymerization of propylene oxide. The initiator is an alcohol and the catalyst a base, usually potassium hydroxide. When the initiator is ethylene glycol or water, the polymer is linear. With a multifunctional initiator like glycerine, pentaerythritol or sorbitol, the polymer is branched. Conventional polymerization of propylene oxide results in an atactic polymer. The isotactic polymer can be produced from optically active propylene oxide, but at a high cost. A salen cobalt catalyst was reported in 2005 to provide isotactic polymerization of the prochiral propylene oxide. Properties PPG has many properties in common with polyethylene glycol. The polymer is a liquid at room temperature. Solubility in water decreases rapidly with increasing molar mass. Secondary hydroxyl groups in PPG are less reactive than the primary hydroxyl groups in polyethylene glycol. PPG is less toxic than PEG, so biotechnologicals are now mainly produced with PPG. Uses PPG is used in many polyurethane formulations, and has also featured in the synthesis of waterborne polymers. As the basic building block is propylene oxide, there are three carbons per oxygen in the backbone. This confers some degree of water miscibility, though not as high as that of ethylene oxide-based molecules. It is used to synthesize the epoxy reactive diluent and flexibilizer poly(propylene glycol) diglycidyl ether. Another use of PPG is as a surfactant, wetting agent and dispersant in leather finishing. PPG is also employed as a reference and calibrant in mass spectrometry and HPLC. PPG and derivatives may be used as defoamers in drilling and other applications. It is also used as a primary ingredient in the making of paintballs. It has been evaluated as a corrosion inhibitor.
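Since PPG grows by ring-opening addition of propylene oxide units, the approximate molar mass of a linear, water-initiated chain follows directly from the repeat-unit mass (C3H6O, about 58.08 g/mol) plus the mass contributed by the water-derived end groups. The following is a minimal Python sketch of that arithmetic; the mapping of repeat-unit counts to commercial grade names in the comment is an illustrative assumption, not a value taken from this article.

```python
PO_UNIT = 58.08   # g/mol, propylene oxide repeat unit (C3H6O)
WATER = 18.02     # g/mol, H and OH end groups from a water initiator

def ppg_molar_mass(n_units: int) -> float:
    """Approximate molar mass (g/mol) of a linear HO-[CH2CH(CH3)O]n-H chain."""
    return n_units * PO_UNIT + WATER

if __name__ == "__main__":
    # Roughly the PPG 425 / 1000 / 2000 grades (illustrative assumption).
    for n in (7, 17, 34):
        print(f"n = {n:2d} -> ~{ppg_molar_mass(n):6.0f} g/mol")
```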
Magnitude (astronomy)
In astronomy, magnitude is a measure of the brightness of an object, usually in a defined passband. An imprecise but systematic determination of the magnitude of objects was introduced in ancient times by Hipparchus. Magnitude values do not have a unit. The scale is logarithmic and defined such that a magnitude 1 star is exactly 100 times brighter than a magnitude 6 star. Thus each step of one magnitude is ⁵√100 ≈ 2.512 times brighter than the next higher magnitude. The brighter an object appears, the lower the value of its magnitude, with the brightest objects reaching negative values. Astronomers use two different definitions of magnitude: apparent magnitude and absolute magnitude. The apparent magnitude (m) is the brightness of an object and depends on an object's intrinsic luminosity, its distance, and the extinction reducing its brightness. The absolute magnitude (M) describes the intrinsic luminosity emitted by an object and is defined to be equal to the apparent magnitude that the object would have if it were placed at a certain distance, 10 parsecs for stars. A more complex definition of absolute magnitude is used for planets and small Solar System bodies, based on their brightness at one astronomical unit from the observer and the Sun. The Sun has an apparent magnitude of −27 and Sirius, the brightest visible star in the night sky, −1.46. Venus at its brightest is −5. The International Space Station (ISS) sometimes reaches a magnitude of −6. Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. At a dark site, it is usual for people to see stars of 6th magnitude or fainter. Apparent magnitude is really a measure of illuminance, which can also be measured in photometric units such as lux. History The Greek astronomer Hipparchus produced a catalogue which noted the apparent brightness of stars in the second century BCE. In the second century CE the Alexandrian astronomer Ptolemy classified stars on a six-point scale, and originated the term magnitude. To the unaided eye, a more prominent star such as Sirius or Arcturus appears larger than a less prominent star such as Mizar, which in turn appears larger than a truly faint star such as Alcor. In 1736, the mathematician John Keill described the ancient naked-eye magnitude system in this way: The fixed Stars appear to be of different Bignesses, not because they really are so, but because they are not all equally distant from us. Those that are nearest will excel in Lustre and Bigness; the more remote Stars will give a fainter Light, and appear smaller to the Eye. Hence arise the Distribution of Stars, according to their Order and Dignity, into Classes; the first Class containing those which are nearest to us, are called Stars of the first Magnitude; those that are next to them, are Stars of the second Magnitude ... and so forth, 'till we come to the Stars of the sixth Magnitude, which comprehend the smallest Stars that can be discerned with the bare Eye. For all the other Stars, which are only seen by the Help of a Telescope, and which are called Telescopical, are not reckoned among these six Orders. 
Altho' the Distinction of Stars into six Degrees of Magnitude is commonly received by Astronomers; yet we are not to judge, that every particular Star is exactly to be ranked according to a certain Bigness, which is one of the Six; but rather in reality there are almost as many Orders of Stars, as there are Stars, few of them being exactly of the same Bigness and Lustre. And even among those Stars which are reckoned of the brightest Class, there appears a Variety of Magnitude; for Sirius or Arcturus are each of them brighter than Aldebaran or the Bull's Eye, or even than the Star in Spica; and yet all these Stars are reckoned among the Stars of the first Order: And there are some Stars of such an intermedial Order, that the Astronomers have differed in classing of them; some putting the same Stars in one Class, others in another. For Example: The little Dog was by Tycho placed among the Stars of the second Magnitude, which Ptolemy reckoned among the Stars of the first Class: And therefore it is not truly either of the first or second Order, but ought to be ranked in a Place between both. Note that the brighter the star, the smaller the magnitude: Bright "first magnitude" stars are "1st-class" stars, while stars barely visible to the naked eye are "sixth magnitude" or "6th-class". The system was a simple delineation of stellar brightness into six distinct groups but made no allowance for the variations in brightness within a group. Tycho Brahe attempted to directly measure the "bigness" of the stars in terms of angular size, which in theory meant that a star's magnitude could be determined by more than just the subjective judgment described in the above quote. He concluded that first magnitude stars measured 2 arc minutes (2′) in apparent diameter (1⁄30 of a degree, or about 1⁄15 the diameter of the full moon), with second through sixth magnitude stars measuring 1 1⁄2′, 1 1⁄12′, 3⁄4′, 1⁄2′, and 1⁄3′, respectively. The development of the telescope showed that these large sizes were illusory; stars appeared much smaller through the telescope. However, early telescopes produced a spurious disk-like image of a star that was larger for brighter stars and smaller for fainter ones. Astronomers from Galileo to Jacques Cassini mistook these spurious disks for the physical bodies of stars, and thus into the eighteenth century continued to think of magnitude in terms of the physical size of a star. Johannes Hevelius produced a very precise table of star sizes measured telescopically, but now the measured diameters ranged from just over six seconds of arc for first magnitude down to just under 2 seconds for sixth magnitude. By the time of William Herschel, astronomers recognized that the telescopic disks of stars were spurious and a function of the telescope as well as the brightness of the stars, but still spoke in terms of a star's size more than its brightness. Even into the early nineteenth century, the magnitude system continued to be described in terms of six classes determined by apparent size. However, by the mid-nineteenth century astronomers had measured the distances to stars via stellar parallax, and so understood that stars are so far away as to essentially appear as point sources of light. 
Following advances in understanding the diffraction of light and astronomical seeing, astronomers fully understood both that the apparent sizes of stars were spurious and how those sizes depended on the intensity of light coming from a star (this is the star's apparent brightness, which can be measured in units such as watts per square metre) so that brighter stars appeared larger. Modern definition Early photometric measurements (made, for example, by using a light to project an artificial "star" into a telescope's field of view and adjusting it to match real stars in brightness) demonstrated that first magnitude stars are about 100 times brighter than sixth magnitude stars. Thus in 1856 Norman Pogson of Oxford proposed that a logarithmic scale of ⁵√100 ≈ 2.512 be adopted between magnitudes, so five magnitude steps corresponded precisely to a factor of 100 in brightness. Every interval of one magnitude equates to a variation in brightness of ⁵√100, or roughly 2.512 times. Consequently, a magnitude 1 star is about 2.5 times brighter than a magnitude 2 star, about 2.5² times brighter than a magnitude 3 star, about 2.5³ times brighter than a magnitude 4 star, and so on. This is the modern magnitude system, which measures the brightness, not the apparent size, of stars. Using this logarithmic scale, it is possible for a star to be brighter than "first class", so Arcturus or Vega are magnitude 0, and Sirius is magnitude −1.46. Scale As mentioned above, the scale appears to work 'in reverse', with objects with a negative magnitude being brighter than those with a positive magnitude. The more negative the value, the brighter the object: on a number line of magnitudes, the brightest objects lie at the most negative values and the dimmest at the most positive. Apparent and absolute magnitude Two of the main types of magnitudes distinguished by astronomers are: Apparent magnitude, the brightness of an object as it appears in the night sky. Absolute magnitude, which measures the luminosity of an object (or reflected light for non-luminous objects like asteroids); it is the object's apparent magnitude as seen from a specific distance, conventionally 10 parsecs (32.6 light years). The difference between these concepts can be seen by comparing two stars. Betelgeuse (apparent magnitude 0.5, absolute magnitude −5.8) appears slightly dimmer in the sky than Alpha Centauri A (apparent magnitude 0.0, absolute magnitude 4.4) even though it emits thousands of times more light, because Betelgeuse is much farther away. Apparent magnitude Under the modern logarithmic magnitude scale, two objects, one of which is used as a reference or baseline, whose fluxes (i.e., brightness, a measure of power per unit area) in units such as watts per square metre (W m⁻²) are F₁ and F₂, will have magnitudes m₁ and m₂ related by m₂ − m₁ = −2.5 log₁₀(F₂/F₁). Astronomers use the term "flux" for what is often called "intensity" in physics, in order to avoid confusion with the specific intensity. Using this formula, the magnitude scale can be extended beyond the ancient magnitude 1–6 range, and it becomes a precise measure of brightness rather than simply a classification system. Astronomers now measure differences as small as one-hundredth of a magnitude. Stars that have magnitudes between 1.5 and 2.5 are called second-magnitude; there are some 20 stars brighter than 1.5, which are first-magnitude stars (see the list of brightest stars). 
For example, Sirius is magnitude −1.46, Arcturus is −0.04, Aldebaran is 0.85, Spica is 1.04, and Procyon is 0.34. Under the ancient magnitude system, all of these stars might have been classified as "stars of the first magnitude". Magnitudes can also be calculated for objects far brighter than stars (such as the Sun and Moon), and for objects too faint for the human eye to see (such as Pluto). Absolute magnitude Often, only apparent magnitude is mentioned since it can be measured directly. Absolute magnitude can be calculated from apparent magnitude and distance from M = m − 5(log₁₀ d − 1), because intensity falls off proportionally to distance squared. This is known as the distance modulus, where d is the distance to the star measured in parsecs, m is the apparent magnitude, and M is the absolute magnitude. If the line of sight between the object and observer is affected by extinction due to absorption of light by interstellar dust particles, then the object's apparent magnitude will be correspondingly fainter. For A magnitudes of extinction, the relationship between apparent and absolute magnitudes becomes M = m − 5(log₁₀ d − 1) − A. Stellar absolute magnitudes are usually designated with a capital M with a subscript to indicate the passband. For example, MV is the magnitude at 10 parsecs in the V passband. A bolometric magnitude (Mbol) is an absolute magnitude adjusted to take account of radiation across all wavelengths; it is typically smaller (i.e. brighter) than an absolute magnitude in a particular passband, especially for very hot or very cool objects. Bolometric magnitudes are formally defined based on stellar luminosity in watts, and are normalised to be approximately equal to MV for yellow stars. Absolute magnitudes for Solar System objects are frequently quoted based on a distance of 1 AU. These are referred to with a capital H symbol. Since these objects are lit primarily by reflected light from the Sun, an H magnitude is defined as the apparent magnitude of the object at 1 AU from the Sun and 1 AU from the observer. Examples The following is a table giving apparent magnitudes for celestial objects and artificial satellites ranging from the Sun to the faintest object visible with the James Webb Space Telescope (JWST): Other scales Any magnitude system must be calibrated to define the brightness of magnitude zero. Many magnitude systems, such as the Johnson UBV system, assign the average brightness of several stars to a certain number by definition, and all other magnitude measurements are compared to that reference point. Other magnitude systems calibrate by measuring energy directly, without a reference point, and these are called "absolute" reference systems. Current absolute reference systems include the AB magnitude system, in which the reference is a source with a constant flux density per unit frequency, and the STMAG system, in which the reference source is instead defined to have constant flux density per unit wavelength. Decibel Another logarithmic measure for intensity is the level, in decibels. Although it is more commonly used for sound intensity, it is also used for light intensity. It is a parameter for photomultiplier tubes and similar camera optics for telescopes and microscopes. Each factor of 10 in intensity corresponds to 10 decibels. In particular, a multiplier of 100 in intensity corresponds to an increase of 20 decibels and also corresponds to a decrease in magnitude by 5. 
Generally, the change in level ΔL is related to a change in magnitude Δm by ΔL = −4 Δm dB. For example, an object that is 1 magnitude larger (fainter) than a reference would produce a signal that is 4 dB smaller (weaker) than the reference, which might need to be compensated by an increase in the capability of the camera by as many decibels.
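The relations given above, the Pogson brightness ratio, the distance modulus, and the magnitude-to-decibel conversion, are easy to verify numerically. Below is a minimal Python sketch; the function names and the Sirius example values (apparent magnitude −1.46 at roughly 2.64 pc) are illustrative choices, not part of the original text.

```python
import math

def brightness_ratio(m1: float, m2: float) -> float:
    """Factor by which an object of magnitude m1 outshines one of magnitude m2.
    Pogson scale: 5 magnitudes correspond to a factor of exactly 100."""
    return 100 ** ((m2 - m1) / 5)

def absolute_magnitude(m: float, d_parsecs: float, extinction: float = 0.0) -> float:
    """Distance modulus: M = m - 5*(log10(d) - 1) - A, with d in parsecs
    and A magnitudes of interstellar extinction."""
    return m - 5 * (math.log10(d_parsecs) - 1) - extinction

def magnitude_change_to_decibels(delta_m: float) -> float:
    """One magnitude is a flux factor of 100**(1/5), i.e. 4 dB, so dL = -4*dm."""
    return -4.0 * delta_m

# Sirius: apparent magnitude -1.46 at a distance of about 2.64 pc.
print(brightness_ratio(-1.46, 6.0))        # ~960x brighter than a magnitude 6 star
print(absolute_magnitude(-1.46, 2.64))     # ~ +1.4, Sirius's absolute magnitude
print(magnitude_change_to_decibels(-5.0))  # +20 dB for a 5-magnitude brightening
```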
Myocardial infarction
A myocardial infarction (MI), commonly known as a heart attack, occurs when blood flow decreases or stops in one of the coronary arteries of the heart, causing infarction (tissue death) to the heart muscle. The most common symptom is retrosternal chest pain or discomfort that classically radiates to the left shoulder, arm, or jaw. The pain may occasionally feel like heartburn. It is the most dangerous type of acute coronary syndrome. Other symptoms may include shortness of breath, nausea, feeling faint, a cold sweat, feeling tired, and decreased level of consciousness. About 30% of people have atypical symptoms. Women more often present without chest pain and instead have neck pain, arm pain or feel tired. Among those over 75 years old, about 5% have had an MI with little or no history of symptoms. An MI may cause heart failure, an irregular heartbeat, cardiogenic shock or cardiac arrest. Most MIs occur due to coronary artery disease. Risk factors include high blood pressure, smoking, diabetes, lack of exercise, obesity, high blood cholesterol, poor diet, and excessive alcohol intake. The complete blockage of a coronary artery caused by a rupture of an atherosclerotic plaque is usually the underlying mechanism of an MI. MIs are less commonly caused by coronary artery spasms, which may be due to cocaine, significant emotional stress (often known as Takotsubo syndrome or broken heart syndrome) and extreme cold, among others. Many tests are helpful with diagnosis, including electrocardiograms (ECGs), blood tests and coronary angiography. An ECG, which is a recording of the heart's electrical activity, may confirm an ST elevation MI (STEMI), if ST elevation is present. Commonly used blood tests include troponin and less often creatine kinase MB. Treatment of an MI is time-critical. Aspirin is an appropriate immediate treatment for a suspected MI. Nitroglycerin or opioids may be used to help with chest pain; however, they do not improve overall outcomes. Supplemental oxygen is recommended in those with low oxygen levels or shortness of breath. In a STEMI, treatments attempt to restore blood flow to the heart and include percutaneous coronary intervention (PCI), where the arteries are pushed open and may be stented, or thrombolysis, where the blockage is removed using medications. People who have a non-ST elevation myocardial infarction (NSTEMI) are often managed with the blood thinner heparin, with the additional use of PCI in those at high risk. In people with blockages of multiple coronary arteries and diabetes, coronary artery bypass surgery (CABG) may be recommended rather than angioplasty. After an MI, lifestyle modifications, along with long-term treatment with aspirin, beta blockers and statins, are typically recommended. Worldwide, about 15.9 million myocardial infarctions occurred in 2015. More than 3 million people had an ST elevation MI, and more than 4 million had an NSTEMI. STEMIs occur about twice as often in men as women. About one million people have an MI each year in the United States. In the developed world, the risk of death in those who have had a STEMI is about 10%. Rates of MI for a given age have decreased globally between 1990 and 2010. In 2011, an MI was one of the top five most expensive conditions during inpatient hospitalizations in the US, with a cost of about $11.5 billion for 612,000 hospital stays. 
Terminology Myocardial infarction (MI) refers to tissue death (infarction) of the heart muscle (myocardium) caused by ischemia, the lack of oxygen delivery to myocardial tissue. It is a type of acute coronary syndrome, which describes a sudden or short-term change in symptoms related to blood flow to the heart. Unlike the other type of acute coronary syndrome, unstable angina, a myocardial infarction occurs when there is cell death, which can be detected by a blood test for biomarkers (the cardiac protein troponin). When there is evidence of an MI, it may be classified as an ST elevation myocardial infarction (STEMI) or non-ST elevation myocardial infarction (NSTEMI) based on the results of an ECG. The phrase "heart attack" is often used non-specifically to refer to myocardial infarction. An MI is different from, but can cause, cardiac arrest, where the heart is not contracting at all or so poorly that all vital organs cease to function, thus leading to death. It is also distinct from heart failure, in which the pumping action of the heart is impaired. However, an MI may lead to heart failure. Signs and symptoms Chest pain that may or may not radiate to other parts of the body is the most typical and significant symptom of myocardial infarction. It might be accompanied by other symptoms such as sweating. Pain Chest pain is one of the most common symptoms of acute myocardial infarction and is often described as a sensation of tightness, pressure, or squeezing. Pain radiates most often to the left arm, but may also radiate to the lower jaw, neck, right arm, back, and upper abdomen. The pain most suggestive of an acute MI, with the highest likelihood ratio, is pain radiating to the right arm and shoulder. Similarly, chest pain similar to that of a previous heart attack is also suggestive. The pain associated with MI is usually diffuse, does not change with position, and lasts for more than 20 minutes. It might be described as a pressure, tightness, knifelike, tearing, or burning sensation (all of which are also manifested in other diseases). It could be felt as unexplained anxiety, and pain might be absent altogether. Levine's sign, in which a person localizes the chest pain by clenching one or both fists over their sternum, has classically been thought to be predictive of cardiac chest pain, although a prospective observational study showed it had a poor positive predictive value. Typically, chest pain due to ischemia, be it unstable angina or myocardial infarction, lessens with the use of nitroglycerin, but nitroglycerin may also relieve chest pain arising from non-cardiac causes. Other Chest pain may be accompanied by sweating, nausea or vomiting, and fainting, and these symptoms may also occur without any pain at all. Dizziness or lightheadedness is common and occurs due to a reduction in oxygen and blood supply to the brain. In females, the most common symptoms of myocardial infarction include shortness of breath, weakness, and fatigue. Females are more likely to have unusual or unexplained tiredness and nausea or vomiting as symptoms. Females having heart attacks are more likely to have palpitations, back pain, labored breathing, vomiting, and left arm pain than males, although the studies showing these differences had high variability. Females are less likely to report chest pain during a heart attack and more likely to report nausea, jaw pain, neck pain, cough, and fatigue, although these findings are inconsistent across studies. 
Females with heart attacks also had more indigestion, dizziness, loss of appetite, and loss of consciousness. Shortness of breath is a common, and sometimes the only, symptom, occurring when damage to the heart limits the output of the left ventricle, with breathlessness arising either from low oxygen in the blood or pulmonary edema. Other less common symptoms include weakness, light-headedness, palpitations, and abnormalities in heart rate or blood pressure. These symptoms are likely induced by a massive surge of catecholamines from the sympathetic nervous system, which occurs in response to pain and, where present, low blood pressure. Loss of consciousness can occur in myocardial infarctions due to inadequate blood flow to the brain and cardiogenic shock, as can sudden death, frequently due to the development of ventricular fibrillation. If the brain is without oxygen for too long due to a myocardial infarction, coma and persistent vegetative state can occur. Cardiac arrest, and atypical symptoms such as palpitations, occur more frequently in females, the elderly, those with diabetes, people who have just had surgery, and critically ill patients. Absence "Silent" myocardial infarctions can happen without any symptoms at all. These cases can be discovered later on electrocardiograms, using blood enzyme tests, or at autopsy after a person has died. Such silent myocardial infarctions represent between 22 and 64% of all infarctions, and are more common in the elderly, in those with diabetes mellitus and after heart transplantation. In people with diabetes, differences in pain threshold, autonomic neuropathy, and psychological factors have been cited as possible explanations for the lack of symptoms. In heart transplantation, the donor heart is not fully innervated by the nervous system of the recipient. Risk factors The most prominent risk factors for myocardial infarction are older age, actively smoking, high blood pressure, diabetes mellitus, and total cholesterol and high-density lipoprotein levels. Many risk factors of myocardial infarction are shared with coronary artery disease, the primary cause of myocardial infarction, with other risk factors including male sex, low levels of physical activity, a family history of the disease, obesity, and alcohol use. Risk factors for myocardial infarction are often included in risk factor stratification scores, such as the Framingham Risk Score. At any given age, men are more at risk than women for the development of cardiovascular disease. High levels of blood cholesterol are a known risk factor, particularly high low-density lipoprotein, low high-density lipoprotein, and high triglycerides. Many risk factors for myocardial infarction are potentially modifiable, with the most important being tobacco smoking (including secondhand smoke). Smoking appears to be the cause of about 36%, and obesity the cause of 20%, of coronary artery disease. Lack of physical activity has been linked to 7–12% of cases. Less common causes include stress-related causes such as job stress, which accounts for about 3% of cases, and chronic high stress levels. Diet There is varying evidence about the importance of saturated fat in the development of myocardial infarctions. Eating polyunsaturated fat instead of saturated fats has been shown in studies to be associated with a decreased risk of myocardial infarction, while other studies find little evidence that reducing dietary saturated fat or increasing polyunsaturated fat intake affects heart attack risk. 
Dietary cholesterol does not appear to have a significant effect on blood cholesterol, and thus recommendations about its consumption may not be needed. Trans fats do appear to increase risk. Acute and prolonged intake of high quantities of alcoholic drinks (3–4 or more daily) increases the risk of a heart attack. Genetics A family history of ischemic heart disease or MI, particularly if one has a male first-degree relative (father, brother) who had a myocardial infarction before age 55, or a female first-degree relative (mother, sister) before age 65, increases a person's risk of MI. Genome-wide association studies have found 27 genetic variants that are associated with an increased risk of myocardial infarction. The strongest association has been found with the short arm of chromosome 9 at locus 21 (9p21), which contains the genes CDKN2A and CDKN2B, although the single nucleotide polymorphisms that are implicated are within a non-coding region. The majority of these variants are in regions that have not been previously implicated in coronary artery disease. The following genes have an association with MI: PCSK9, SORT1, MIA3, WDR12, MRAS, PHACTR1, LPA, TCF21, MTHFDSL, ZC3HC1, CDKN2A, 2B, ABO, PDGF0, APOA5, MNF1ASM283, COL4A1, HHIPC1, SMAD3, ADAMTS7, RAS1, SMG6, SNF8, LDLR, SLC5A3, MRPS6, KCNE2. Other The risk of having a myocardial infarction increases with older age, low physical activity, and low socioeconomic status. Heart attacks appear to occur more commonly in the morning hours, especially between 6 AM and noon. Evidence suggests that heart attacks are at least three times more likely to occur in the morning than in the late evening. Shift work is also associated with a higher risk of MI. One analysis has found an increase in heart attacks immediately following the start of daylight saving time. Women who use combined oral contraceptive pills have a modestly increased risk of myocardial infarction, especially in the presence of other risk factors. The use of non-steroidal anti-inflammatory drugs (NSAIDs), even for as short as a week, increases risk. Endometriosis in women under the age of 40 is an identified risk factor. Air pollution is also an important modifiable risk. Short-term exposure to air pollution such as carbon monoxide, nitrogen dioxide, and sulfur dioxide (but not ozone) has been associated with MI and other acute cardiovascular events. For sudden cardiac deaths, every increment of 30 units in the Pollutant Standards Index correlated with an 8% increased risk of out-of-hospital cardiac arrest on the day of exposure. Extremes of temperature are also associated. A number of acute and chronic infections, including Chlamydophila pneumoniae, influenza, Helicobacter pylori, and Porphyromonas gingivalis among others, have been linked to atherosclerosis and myocardial infarction. Myocardial infarction can also occur as a late consequence of Kawasaki disease. Calcium deposits in the coronary arteries can be detected with CT scans. Calcium seen in coronary arteries can provide predictive information beyond that of classical risk factors. High blood levels of the amino acid homocysteine are associated with premature atherosclerosis; whether elevated homocysteine in the normal range is causal is controversial. In people without evident coronary artery disease, possible causes for the myocardial infarction are coronary spasm or coronary artery dissection. 
Mechanism Atherosclerosis The most common cause of a myocardial infarction is the rupture of an atherosclerotic plaque on an artery supplying heart muscle. Plaques can become unstable, rupture, and additionally promote the formation of a blood clot that blocks the artery; this can occur in minutes. Blockage of an artery can lead to tissue death in the tissue supplied by that artery. Atherosclerotic plaques are often present for decades before they result in symptoms. The gradual buildup of cholesterol and fibrous tissue in plaques in the wall of the coronary arteries or other arteries, typically over decades, is termed atherosclerosis. Atherosclerosis is characterized by progressive inflammation of the walls of the arteries. Inflammatory cells, particularly macrophages, move into affected arterial walls. Over time, they become laden with cholesterol products, particularly LDL, and become foam cells. A cholesterol core forms as foam cells die. In response to growth factors secreted by macrophages, smooth muscle and other cells move into the plaque and act to stabilize it. A stable plaque may have a thick fibrous cap with calcification. If there is ongoing inflammation, the cap may be thin or ulcerate. Exposed to the pressure associated with blood flow, plaques, especially those with a thin lining, may rupture and trigger the formation of a blood clot (thrombus). Cholesterol crystals have been associated with plaque rupture through mechanical injury and inflammation. Other causes Atherosclerotic disease is not the only cause of myocardial infarction, but it may exacerbate or contribute to other causes. A myocardial infarction may result from a heart with a limited blood supply being subjected to increased oxygen demands, such as in fever, a fast heart rate, hyperthyroidism, too few red blood cells in the bloodstream, or low blood pressure. Damage or failure of procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass grafts (CABG) may cause a myocardial infarction. Spasm of coronary arteries, such as in Prinzmetal's angina, may cause blockage. Tissue death If impaired blood flow to the heart lasts long enough, it triggers a process called the ischemic cascade; the heart cells in the territory of the blocked coronary artery die (infarction), chiefly through necrosis, and do not grow back. A collagen scar forms in their place. When an artery is blocked, cells lack the oxygen needed to produce ATP in mitochondria. ATP is required for the maintenance of electrolyte balance, particularly through the Na/K ATPase. This leads to an ischemic cascade of intracellular changes, and to necrosis and apoptosis of affected cells. Cells in the area with the worst blood supply, just below the inner surface of the heart (endocardium), are most susceptible to damage. Ischemia first affects this region, the subendocardial region, and tissue begins to die within 15–30 minutes of loss of blood supply. The dead tissue is surrounded by a zone of potentially reversible ischemia that progresses to become a full-thickness transmural infarct. The initial "wave" of infarction can take place over 3–4 hours. These changes are seen on gross pathology and cannot be predicted by the presence or absence of Q waves on an ECG. The position, size and extent of an infarct depend on the affected artery, the completeness of the blockage, the duration of the blockage, the presence of collateral blood vessels, oxygen demand, and the success of interventional procedures. 
Tissue death and myocardial scarring alter the normal conduction pathways of the heart and weaken affected areas. The size and location of the damage put a person at risk of abnormal heart rhythms (arrhythmias) or heart block, aneurysm of the heart ventricles, inflammation of the heart wall following infarction, and rupture of the heart wall, which can have catastrophic consequences. Injury to the myocardium also occurs during reperfusion. This might manifest as ventricular arrhythmia. Reperfusion injury is a consequence of the uptake of calcium and sodium by the cardiac cells and the release of oxygen radicals during reperfusion. The no-reflow phenomenon, when blood is still unable to be distributed to the affected myocardium despite clearing of the occlusion, also contributes to myocardial injury. Local endothelial swelling is one of many factors contributing to this phenomenon. Diagnosis Criteria A myocardial infarction, according to current consensus, is defined by elevated cardiac biomarkers with a rising or falling trend and at least one of the following: symptoms relating to ischemia; changes on an electrocardiogram (ECG), such as ST segment changes, new left bundle branch block, or pathologic Q waves; changes in the motion of the heart wall on imaging; or demonstration of a thrombus on angiogram or at autopsy. Types A myocardial infarction is usually clinically classified as an ST-elevation MI (STEMI) or a non-ST elevation MI (NSTEMI). These are based on ST elevation, a portion of a heartbeat graphically recorded on an ECG. STEMIs make up about 25–40% of myocardial infarctions. A more explicit classification system, based on international consensus in 2012, also exists. This classifies myocardial infarctions into five types: spontaneous MI related to plaque erosion and/or rupture, fissuring, or dissection; MI related to ischemia, such as from increased oxygen demand or decreased supply, e.g., coronary artery spasm, coronary embolism, anemia, arrhythmias, high blood pressure, or low blood pressure; sudden unexpected cardiac death, including cardiac arrest, where symptoms may suggest MI, an ECG may be taken with suggestive changes, or a blood clot is found in a coronary artery by angiography and/or at autopsy, but where blood samples could not be obtained, or at a time before the appearance of cardiac biomarkers in the blood; MI associated with coronary angioplasty or stents, whether associated with percutaneous coronary intervention (PCI) or with stent thrombosis as documented by angiography or at autopsy; MI associated with CABG; and MI associated with spontaneous coronary artery dissection in young, fit women. Cardiac biomarkers There are many different biomarkers used to determine the presence of cardiac muscle damage. Troponins, measured through a blood test, are considered to be the best, and are preferred because they have greater sensitivity and specificity for measuring injury to the heart muscle than other tests. A rise in troponin occurs within 2–3 hours of injury to the heart muscle, and peaks within 1–2 days. The level of the troponin, as well as its change over time, are useful in measuring and diagnosing or excluding myocardial infarctions, and the diagnostic accuracy of troponin testing is improving over time. A single high-sensitivity cardiac troponin measurement can rule out a heart attack as long as the ECG is normal. Other tests, such as CK-MB or myoglobin, are discouraged. 
CK-MB is not as specific as troponins for acute myocardial injury, and may be elevated with past cardiac surgery, inflammation or electrical cardioversion; it rises within 4–8 hours and returns to normal within 2–3 days. Copeptin may be useful to rule out MI rapidly when used along with troponin. Electrocardiogram An electrocardiogram (ECG) is a recording of the electrical activity associated with contraction of the heart muscle, obtained from a series of leads placed on a person's chest. The taking of an ECG is an important part of the workup of an AMI, and ECGs are often not just taken once but may be repeated over minutes to hours, or in response to changes in signs or symptoms. ECG readouts produce a waveform with different labeled features. In addition to a rise in biomarkers, a rise in the ST segment, changes in the shape or flipping of T waves, new Q waves, or a new left bundle branch block can be used to diagnose an AMI. In addition, ST elevation can be used to diagnose an ST segment myocardial infarction (STEMI). The rise must be new: ≥2 mm (0.2 mV) in leads V2 and V3 for males, ≥1.5 mm (0.15 mV) in those leads for females, or ≥1 mm (0.1 mV) in two other adjacent chest or limb leads. ST elevation is associated with infarction, and may be preceded by changes indicating ischemia, such as ST depression or inversion of the T waves. Abnormalities can help differentiate the location of an infarct, based on the leads that are affected by changes. Early STEMIs may be preceded by peaked T waves. Other ECG abnormalities relating to complications of acute myocardial infarctions may also be evident, such as atrial or ventricular fibrillation. Imaging Noninvasive imaging plays an important role in the diagnosis and characterisation of myocardial infarction. Tests such as chest X-rays can be used to explore and exclude alternate causes of a person's symptoms. Echocardiography may assist in modifying clinical suspicion of ongoing myocardial infarction in patients who cannot be ruled out or ruled in following initial ECG and troponin testing. Myocardial perfusion imaging has no role in the acute diagnostic algorithm; however, it can confirm a clinical suspicion of chronic coronary syndrome when the patient's history, physical examination (including cardiac examination), ECG, and cardiac biomarkers suggest coronary artery disease. Echocardiography, an ultrasound scan of the heart, is able to visualize the heart, its size, shape, and any abnormal motion of the heart walls as they beat that may indicate a myocardial infarction. The flow of blood can be imaged, and contrast dyes may be given to improve the image. Other scans using radioactive contrast include SPECT CT-scans using thallium, sestamibi (MIBI scans) or tetrofosmin, or a PET scan using fludeoxyglucose or rubidium-82. These nuclear medicine scans can visualize the perfusion of heart muscle. SPECT may also be used to determine viability of tissue, and whether areas of ischemia are inducible. Medical societies and professional guidelines recommend that the physician confirm a person is at high risk for chronic coronary syndrome before conducting diagnostic non-invasive imaging tests to make a diagnosis, as such tests are unlikely to change management and result in increased costs. Patients who have a normal ECG and who are able to exercise, for example, most likely do not merit routine imaging. Differential diagnosis There are many causes of chest pain, which can originate from the heart, lungs, gastrointestinal tract, aorta, and other muscles, bones and nerves surrounding the chest. 
In addition to myocardial infarction, other causes include angina, insufficient blood supply (ischemia) to the heart muscles without evidence of cell death, gastroesophageal reflux disease, pulmonary embolism, tumors of the lungs, pneumonia, rib fracture, costochondritis and other musculoskeletal injuries, and heart failure. Rarer severe differential diagnoses include aortic dissection, esophageal rupture, tension pneumothorax, and pericardial effusion causing cardiac tamponade. The chest pain in an MI may mimic heartburn. Causes of sudden-onset breathlessness generally involve the lungs or heart, including pulmonary edema, pneumonia, allergic reactions and asthma, pulmonary embolus, acute respiratory distress syndrome and metabolic acidosis. There are many different causes of fatigue, and myocardial infarction is not a common cause. Prevention There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarction, because of shared risk factors and an aim to reduce atherosclerosis affecting heart vessels. The influenza vaccine also appears to protect against myocardial infarction, with a benefit of 15 to 45%. Primary prevention Lifestyle Physical activity can reduce the risk of cardiovascular disease, and people at risk are advised to engage in 150 minutes of moderate or 75 minutes of vigorous-intensity aerobic exercise a week. Keeping a healthy weight, drinking alcohol within the recommended limits, and quitting smoking reduce the risk of cardiovascular disease. Substituting unsaturated fats such as olive oil and rapeseed oil for saturated fats may reduce the risk of myocardial infarction, although there is not universal agreement. Dietary modifications are recommended by some national authorities, with recommendations including increasing the intake of wholegrain starch, reducing sugar intake (particularly of refined sugar), consuming five portions of fruit and vegetables daily, consuming two or more portions of fish per week, and consuming 4–5 portions of unsalted nuts, seeds, or legumes per week. The dietary pattern with the greatest support is the Mediterranean diet. Vitamin and mineral supplements are of no proven benefit, and neither are plant stanols or sterols. Public health measures may also act at a population level to reduce the risk of myocardial infarction, for example by discouraging unhealthy diets (excessive salt, saturated fat, and trans fat) through food labeling and marketing requirements as well as requirements for catering and restaurants, and by stimulating physical activity. This may be part of regional cardiovascular disease prevention programs or through the health impact assessment of regional and local plans and policies. Most guidelines recommend combining different preventive strategies. A 2015 Cochrane review found some evidence that such an approach might help with blood pressure, body mass index and waist circumference. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events. Medication Statins, drugs that act to lower blood cholesterol, decrease the incidence and mortality rates of myocardial infarctions. They are often recommended in those at an elevated risk of cardiovascular diseases. Aspirin has been studied extensively in people considered at increased risk of myocardial infarction. Based on numerous studies in different groups (e.g. 
people with or without diabetes), there does not appear to be a benefit strong enough to outweigh the risk of excessive bleeding. Nevertheless, many clinical practice guidelines continue to recommend aspirin for primary prevention, and some researchers feel that those with very high cardiovascular risk but low risk of bleeding should continue to receive aspirin. Secondary prevention There is a large crossover between the lifestyle and activity recommendations to prevent a myocardial infarction and those that may be adopted as secondary prevention after an initial myocardial infarction. Recommendations include stopping smoking, a gradual return to exercise, eating a healthy diet low in saturated fat and cholesterol, drinking alcohol within recommended limits, exercising, and trying to achieve a healthy weight. Exercise is both safe and effective even if people have had stents or heart failure, and is recommended to start gradually after 1–2 weeks. Counselling should be provided relating to medications used, and for warning signs of depression. Previous studies suggested a benefit from omega-3 fatty acid supplementation, but this has not been confirmed. Medications Following a heart attack, nitrates, when taken for two days, and ACE inhibitors decrease the risk of death. Other medications include: Aspirin is continued indefinitely, as well as another antiplatelet agent such as clopidogrel or ticagrelor ("dual antiplatelet therapy" or DAPT) for up to twelve months. If someone has another medical condition that requires anticoagulation (e.g. with warfarin), this may need to be adjusted based on the risk of further cardiac events as well as bleeding risk. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death. Beta blocker therapy such as metoprolol or carvedilol is recommended to be started within 24 hours, provided there is no acute heart failure or heart block. The dose should be increased to the highest tolerated. Contrary to most guidelines, the use of beta blockers does not appear to affect the risk of death, possibly because other treatments for MI have improved. When beta blocker medication is given within the first 24–72 hours of a STEMI, no lives are saved. However, for 1 in 200 people treated a repeat heart attack is prevented, and for another 1 in 200 an abnormal heart rhythm is prevented. Additionally, for 1 in 91, the medication causes a temporary decrease in the heart's ability to pump blood. ACE inhibitor therapy should be started within 24 hours and continued indefinitely at the highest tolerated dose, provided there is no evidence of worsening kidney failure, high potassium, low blood pressure, or known narrowing of the renal arteries. Those who cannot tolerate ACE inhibitors may be treated with an angiotensin II receptor antagonist. Statin therapy has been shown to reduce mortality and subsequent cardiac events and should be commenced to lower LDL cholesterol. Other medications, such as ezetimibe, may also be added with this goal in mind. Aldosterone antagonists (spironolactone or eplerenone) may be used if there is evidence of left ventricular dysfunction after an MI, ideally after beginning treatment with an ACE inhibitor. Other A defibrillator, an electric device connected to the heart and surgically inserted under the skin, may be recommended.
This is particularly the case if there are ongoing signs of heart failure, with a low left ventricular ejection fraction and a New York Heart Association grade II or III, 40 days after the infarction. Defibrillators detect potentially fatal arrhythmias and deliver an electrical shock to the person to depolarize a critical mass of the heart muscle. First aid Taking aspirin helps to reduce the risk of death in people with myocardial infarction. Management A myocardial infarction requires immediate medical attention. Treatment aims to preserve as much heart muscle as possible and to prevent further complications. Treatment depends on whether the myocardial infarction is a STEMI or NSTEMI. Treatment in general aims to unblock blood vessels, reduce blood clot enlargement, reduce ischemia, and modify risk factors with the aim of preventing future MIs. In addition, the main treatments for myocardial infarctions with ECG evidence of ST elevation (STEMI) include thrombolysis and percutaneous coronary intervention (PCI), although PCI is also ideally conducted within 1–3 days for NSTEMI. In addition to clinical judgement, risk stratification may be used to guide treatment, such as with the TIMI and GRACE scoring systems. Pain The pain associated with myocardial infarction is often treated with nitroglycerin, a vasodilator, or opioid medications such as morphine. Nitroglycerin (given under the tongue or injected into a vein) may improve blood supply to the heart. It is an important part of therapy for its pain-relief effects, though there is no proven benefit to mortality. Morphine or other opioid medications may also be used, and are effective for the pain associated with STEMI. There is poor evidence that morphine shows any benefit to overall outcomes, and there is some evidence of potential harm. Antithrombotics Aspirin, an antiplatelet drug, is given as a loading dose to reduce the clot size and reduce further clotting in the affected artery. It is known to decrease mortality associated with acute myocardial infarction by at least 50%. P2Y12 inhibitors such as clopidogrel, prasugrel and ticagrelor are given concurrently, also as a loading dose, with the dose depending on whether further surgical management or fibrinolysis is planned. Prasugrel and ticagrelor are recommended in European and American guidelines, as they act more quickly and consistently than clopidogrel. P2Y12 inhibitors are recommended in both NSTEMI and STEMI, including in PCI, with evidence also suggesting improved mortality. Heparins, particularly in the unfractionated form, act at several points in the clotting cascade, help to prevent the enlargement of a clot, and are also given in myocardial infarction, owing to evidence suggesting improved mortality rates. In very high-risk scenarios, inhibitors of the platelet glycoprotein IIb/IIIa receptor (αIIbβ3), such as eptifibatide or tirofiban, may be used. There is varying evidence on the mortality benefits in NSTEMI. A 2014 review of P2Y12 inhibitors such as clopidogrel found they do not change the risk of death when given to people with a suspected NSTEMI prior to PCI, nor do heparins change the risk of death. They do decrease the risk of having a further myocardial infarction. Angiogram Primary percutaneous coronary intervention (PCI) is the treatment of choice for STEMI if it can be performed in a timely manner, ideally within 90–120 minutes of contact with a medical provider. Some recommend that it also be performed in NSTEMI within 1–3 days, particularly when the person is considered high-risk.
A 2017 review, however, did not find a difference between early versus later PCI in NSTEMI. PCI involves small probes, inserted through peripheral blood vessels such as the femoral artery or radial artery into the blood vessels of the heart. The probes are then used to identify and clear blockages, using small balloons that are dragged through the blocked segment, dragging away the clot, or by the insertion of stents. Coronary artery bypass grafting is only considered when the affected area of heart muscle is large and PCI is unsuitable, for example with difficult cardiac anatomy. After PCI, people are generally placed on aspirin indefinitely and on dual antiplatelet therapy (generally aspirin and clopidogrel) for at least a year. Fibrinolysis If PCI cannot be performed within 90 to 120 minutes in STEMI, then fibrinolysis, preferably within 30 minutes of arrival to hospital, is recommended. If a person has had symptoms for 12 to 24 hours, the evidence for the effectiveness of thrombolysis is weaker, and if they have had symptoms for more than 24 hours it is not recommended. Thrombolysis involves the administration of medication that activates the enzymes that normally dissolve blood clots. These medications include tissue plasminogen activator, reteplase, streptokinase, and tenecteplase. Thrombolysis is not recommended in a number of situations, particularly when associated with a high risk of bleeding or the potential for problematic bleeding, such as active bleeding, past strokes or bleeds into the brain, or severe hypertension. Situations in which thrombolysis may be considered, but with caution, include recent surgery, use of anticoagulants, pregnancy, and a proclivity to bleeding. Major risks of thrombolysis are major bleeding and intracranial bleeding. Pre-hospital thrombolysis reduces time to thrombolytic treatment, based on studies conducted in higher-income countries; however, it is unclear whether this has an impact on mortality rates. Other In the past, high-flow oxygen was recommended for everyone with a possible myocardial infarction. More recently, no evidence was found for routine use in those with normal oxygen levels, and there is potential harm from the intervention. Therefore, oxygen is currently only recommended if oxygen levels are found to be low or if someone is in respiratory distress. If, despite thrombolysis, there is significant cardiogenic shock, continued severe chest pain, or less than a 50% improvement in ST elevation on the ECG recording after 90 minutes, then emergency rescue PCI is indicated. Those who have had cardiac arrest may benefit from targeted temperature management with evaluation for implementation of hypothermia protocols. Furthermore, those with cardiac arrest and ST elevation at any time should usually have angiography. Aldosterone antagonists appear to be useful in people who have had a STEMI and do not have heart failure. Rehabilitation and exercise Cardiac rehabilitation benefits many who have experienced myocardial infarction, even if there has been substantial heart damage and resultant left ventricular failure. It should start soon after discharge from the hospital. The program may include lifestyle advice, exercise, social support, as well as recommendations about driving, flying, sports participation, stress management, and sexual intercourse. Returning to sexual activity after myocardial infarction is a major concern for most patients, and is an important area to be discussed in the provision of holistic care.
In the short term, exercise-based cardiovascular rehabilitation programs may reduce the risk of a myocardial infarction, reduce a large number of hospitalizations from all causes, reduce hospital costs, improve health-related quality of life, and have a small effect on all-cause mortality. Longer-term studies indicate that exercise-based cardiovascular rehabilitation programs may reduce cardiovascular mortality and myocardial infarction. Prognosis The prognosis after myocardial infarction varies greatly depending on the extent and location of the affected heart muscle, and the development and management of complications. Prognosis is worse with older age and social isolation. Anterior infarcts, persistent ventricular tachycardia or fibrillation, development of heart blocks, and left ventricular impairment are all associated with poorer prognosis. Without treatment, about a quarter of those affected by MI die within minutes and about forty percent within the first month. Morbidity and mortality from myocardial infarction have, however, improved over the years due to earlier and better treatment: in those who have a STEMI in the United States, between 5 and 6 percent die before leaving the hospital and 7 to 18 percent die within a year. It is unusual for babies to experience a myocardial infarction, but when they do, about half die. In the short term, neonatal survivors seem to have a normal quality of life. Complications Complications may occur immediately following the myocardial infarction or may take time to develop. Disturbances of heart rhythms, including atrial fibrillation, ventricular tachycardia and fibrillation, and heart block can arise as a result of ischemia, cardiac scarring, and infarct location. Stroke is also a risk, either as a result of clots transmitted from the heart during PCI, as a result of bleeding following anticoagulation, or as a result of disturbances in the heart's ability to pump effectively as a result of the infarction. Regurgitation of blood through the mitral valve is possible, particularly if the infarction causes dysfunction of the papillary muscle. Cardiogenic shock, as a result of the heart being unable to adequately pump blood, may develop, depending on infarct size, and is most likely to occur within the days following an acute myocardial infarction. Cardiogenic shock is the largest cause of in-hospital mortality. Rupture of the ventricular dividing wall or left ventricular wall may occur within the initial weeks. Dressler's syndrome, a reaction following larger infarcts and a cause of pericarditis, is also possible. Heart failure may develop as a long-term consequence, with an impaired ability of heart muscle to pump, scarring, and an increase in the size of the existing muscle. Aneurysm of the left ventricular myocardium develops after about 10% of MIs and is itself a risk factor for heart failure, ventricular arrhythmia, and the development of clots. Risk factors for complications and death include age, hemodynamic parameters (such as heart failure, cardiac arrest on admission, systolic blood pressure, or Killip class of two or greater), ST-segment deviation, diabetes, serum creatinine, peripheral vascular disease, and elevation of cardiac markers. Epidemiology Myocardial infarction is a common presentation of coronary artery disease.
The World Health Organization estimated in 2004 that 12.2% of worldwide deaths were from ischemic heart disease, it being the leading cause of death in high- or middle-income countries and second only to lower respiratory infections in lower-income countries. Worldwide, more than 3 million people have STEMIs and 4 million have NSTEMIs each year. STEMIs occur about twice as often in men as in women. Rates of death from ischemic heart disease (IHD) have slowed or declined in most high-income countries, although cardiovascular disease still accounted for one in three of all deaths in the US in 2008. For example, rates of death from cardiovascular disease decreased by almost a third between 2001 and 2011 in the United States. In contrast, IHD is becoming a more common cause of death in the developing world. For example, in India, IHD had become the leading cause of death by 2004, accounting for 1.46 million deaths (14% of total deaths), and deaths due to IHD were expected to double during 1985–2015. Globally, disability-adjusted life years (DALYs) lost to ischemic heart disease are predicted to account for 5.5% of total DALYs in 2030, making it the second-most-important cause of disability (after unipolar depressive disorder), as well as the leading cause of death by this date. Social determinants of health Social determinants such as neighborhood disadvantage, immigration status, lack of social support, social isolation, and access to health services play an important role in myocardial infarction risk and survival. Studies have shown that low socioeconomic status is associated with an increased risk of poorer survival. There are well-documented disparities in myocardial infarction survival by socioeconomic status, race, education, and census-tract-level poverty. Race: In the U.S., African Americans have a greater burden of myocardial infarction and other cardiovascular events. On a population level, there is a higher overall prevalence of risk factors that are unrecognized and therefore not treated, which places these individuals at a greater likelihood of experiencing adverse outcomes and therefore potentially higher morbidity and mortality. Similarly, South Asians (including South Asians who have migrated to other countries around the world) experience higher rates of acute myocardial infarction at younger ages, which can be largely explained by a higher prevalence of risk factors at younger ages. Socioeconomic status: Among individuals who live in low-socioeconomic-status (SES) areas, home to close to 25% of the US population, myocardial infarctions (MIs) occurred twice as often as among people who lived in higher-SES areas. Immigration status: In 2018, many lawfully present immigrants who were eligible for coverage remained uninsured because immigrant families faced a range of enrollment barriers, including fear, confusion about eligibility policies, difficulty navigating the enrollment process, and language and literacy challenges. Uninsured undocumented immigrants are ineligible for coverage options due to their immigration status. Health care access: Lack of health insurance and financial concerns about accessing care were associated with delays in seeking emergency care for acute myocardial infarction, which can have significant, adverse consequences on patient outcomes.
Education: Researchers found that, compared to people with graduate degrees, those with lower educational attainment appeared to have a higher risk of heart attack, dying from a cardiovascular event, and overall death. Society and culture Depictions of heart attacks in popular media often include collapsing or loss of consciousness, which are not common symptoms; these depictions contribute to widespread misunderstanding about the symptoms of myocardial infarction, which in turn contributes to people not getting care when they should. Legal implications At common law, in general, a myocardial infarction is a disease but may sometimes be an injury. This can create coverage issues in the administration of no-fault insurance schemes such as workers' compensation. In general, a heart attack is not covered; however, it may be a work-related injury if it results, for example, from unusual emotional stress or unusual exertion. In addition, in some jurisdictions, heart attacks suffered by persons in particular occupations such as police officers may be classified as line-of-duty injuries by statute or policy. In some countries or states, a person who has had an MI may be prevented from participating in activities that put other people's lives at risk, for example driving a car or flying an airplane.
Matrix (mathematics)
In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions, with elements or entries arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, $\begin{bmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix}$ is a matrix with two rows and three columns. This is often referred to as a "two-by-three matrix", a "$2 \times 3$ matrix", or a matrix of dimension $2 \times 3$. Matrices are commonly related to linear algebra. Notable exceptions include incidence matrices and adjacency matrices in graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such. Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory. Square matrices of a given dimension form a noncommutative ring, which is one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated with the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself a determinant. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes. In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimensions. Matrices are used in most areas of mathematics and scientific fields, either directly, or through their use in geometry and numerical analysis. Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics, and statistics. Definition A matrix is a rectangular array of numbers (or other mathematical objects), called the entries of the matrix. Matrices are subject to standard operations such as addition and multiplication. Most commonly, a matrix over a field $F$ is a rectangular array of elements of $F$. A real matrix and a complex matrix are matrices whose entries are respectively real numbers or complex numbers. More general types of entries are discussed below. For instance, this is a real matrix: $\begin{bmatrix} -1.3 & 0.6 \\ 20.4 & 5.5 \\ 9.7 & -6.2 \end{bmatrix}$ The numbers, symbols, or expressions in the matrix are called its entries or its elements. The horizontal and vertical lines of entries in a matrix are called rows and columns, respectively. Size The size of a matrix is defined by the number of rows and columns it contains. There is no limit to the number of rows and columns that a matrix (in the usual sense) can have, as long as they are positive integers. A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix, or $m$-by-$n$ matrix, where $m$ and $n$ are called its dimensions. For example, the matrix above is a $3 \times 2$ matrix. Matrices with a single row are called row vectors, and those with a single column are called column vectors. A matrix with the same number of rows and columns is called a square matrix. A matrix with an infinite number of rows or columns (or both) is called an infinite matrix. In some contexts, such as computer algebra programs, it is useful to consider a matrix with no rows or no columns, called an empty matrix. Notation The specifics of symbolic matrix notation vary widely, with some prevailing trends.
Matrices are commonly written in square brackets or parentheses, so that an $m \times n$ matrix $A$ is represented as $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}.$ This may be abbreviated by writing only a single generic term, possibly along with indices, as in $A = (a_{ij})$, $[a_{ij}]$, or $(a_{ij})_{1 \le i \le m,\, 1 \le j \le n}$, or $(a_{ij})_{n \times n}$ in the case that $n = m$. Matrices are usually symbolized using upper-case letters (such as $A$ in the examples above), while the corresponding lower-case letters, with two subscript indices (e.g., $a_{11}$, or $a_{1,1}$), represent the entries. In addition to using upper-case letters to symbolize matrices, many authors use a special typographical style, commonly boldface Roman (non-italic), to further distinguish matrices from other mathematical objects. An alternative notation involves the use of a double-underline with the variable name, with or without boldface style, as in $\underline{\underline{A}}$. The entry in the $i$-th row and $j$-th column of a matrix $A$ is sometimes referred to as the $(i,j)$ entry of the matrix, and commonly denoted by $a_{ij}$ or $a_{i,j}$. Alternative notations for that entry are $A[i,j]$ and $A_{i,j}$. For example, the $(1,3)$ entry of the following matrix $A$ is 5 (also denoted $a_{13}$, $a_{1,3}$, $A[1,3]$, or $A_{1,3}$): $A = \begin{bmatrix} 4 & -7 & 5 & 0 \\ -2 & 0 & 11 & 8 \\ 19 & 1 & -3 & 12 \end{bmatrix}$ Sometimes, the entries of a matrix can be defined by a formula such as $a_{ij} = f(i, j)$. For example, each of the entries of the following matrix $A$ is determined by the formula $a_{ij} = i - j$: $A = \begin{bmatrix} 0 & -1 & -2 & -3 \\ 1 & 0 & -1 & -2 \\ 2 & 1 & 0 & -1 \end{bmatrix}$ In this case, the matrix itself is sometimes defined by that formula, within square brackets or double parentheses. For example, the matrix above is defined as $A = [i - j]$ or $A = ((i - j))$. If matrix size is $m \times n$, the above-mentioned formula $f(i, j)$ is valid for any $i = 1, \dots, m$ and any $j = 1, \dots, n$. This can be specified separately or indicated using $m \times n$ as a subscript. For instance, the matrix $A$ above is $3 \times 4$, and can be defined as $A = [i - j]\ (i = 1, 2, 3;\ j = 1, \dots, 4)$ or $A = [i - j]_{3 \times 4}$. Some programming languages utilize doubly subscripted arrays (or arrays of arrays) to represent an $m$-by-$n$ matrix. Some programming languages start the numbering of array indexes at zero, in which case the entries of an $m$-by-$n$ matrix are indexed by $0 \le i \le m - 1$ and $0 \le j \le n - 1$. This article follows the more common convention in mathematical writing where enumeration starts from 1. The set of all $m$-by-$n$ real matrices is often denoted $\mathcal{M}(m, n)$ or $\mathcal{M}_{m \times n}(\mathbb{R})$. The set of all $m$-by-$n$ matrices over another field, or over a ring $R$, is similarly denoted $\mathcal{M}(m, n, R)$ or $\mathcal{M}_{m \times n}(R)$. If $m = n$, such as in the case of square matrices, one does not repeat the dimension: $\mathcal{M}(n, R)$ or $\mathcal{M}_n(R)$. Often, $M$, or $\operatorname{Mat}$, is used in place of $\mathcal{M}$. Basic operations Several basic operations can be applied to matrices. Some, such as transposition and submatrix extraction, do not depend on the nature of the entries. Others, such as matrix addition, scalar multiplication, matrix multiplication, and row operations, involve operations on matrix entries and therefore require that matrix entries are numbers or belong to a field or a ring. In this section, it is supposed that matrix entries belong to a fixed ring, which is typically a field of numbers. Addition, scalar multiplication, subtraction and transposition Addition The sum $A + B$ of two $m \times n$ matrices $A$ and $B$ is calculated entrywise: $(A + B)_{ij} = a_{ij} + b_{ij}$, for $1 \le i \le m$ and $1 \le j \le n$. For example, $\begin{bmatrix} 1 & 3 & 1 \\ 1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 & 5 \\ 7 & 5 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 6 \\ 8 & 5 & 0 \end{bmatrix}.$ Scalar multiplication The product $cA$ of a number $c$ (also called a scalar in this context) and a matrix $A$ is computed by multiplying every entry of $A$ by $c$: $(cA)_{ij} = c \cdot a_{ij}.$ This operation is called scalar multiplication, but its result is not named "scalar product" to avoid confusion, since "scalar product" is often used as a synonym for "inner product".
For example: $2 \cdot \begin{bmatrix} 1 & 8 & -3 \\ 4 & -2 & 5 \end{bmatrix} = \begin{bmatrix} 2 & 16 & -6 \\ 8 & -4 & 10 \end{bmatrix}.$ Subtraction The subtraction of two $m \times n$ matrices is defined by composing matrix addition with scalar multiplication by $-1$: $A - B = A + (-1) \cdot B.$ Transposition The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^{\mathrm{T}}$ (also denoted $A^{\mathrm{tr}}$ or ${}^{\mathrm{t}}A$) formed by turning rows into columns and vice versa: $(A^{\mathrm{T}})_{ij} = a_{ji}.$ For example: $\begin{bmatrix} 1 & 2 & 3 \\ 0 & -6 & 7 \end{bmatrix}^{\mathrm{T}} = \begin{bmatrix} 1 & 0 \\ 2 & -6 \\ 3 & 7 \end{bmatrix}.$ Familiar properties of numbers extend to these operations on matrices: for example, addition is commutative, that is, the matrix sum does not depend on the order of the summands: $A + B = B + A$. The transpose is compatible with addition and scalar multiplication, as expressed by $(A + B)^{\mathrm{T}} = A^{\mathrm{T}} + B^{\mathrm{T}}$ and $(cA)^{\mathrm{T}} = c(A^{\mathrm{T}})$. Finally, $(A^{\mathrm{T}})^{\mathrm{T}} = A$. Matrix multiplication Multiplication of two matrices is defined if and only if the number of columns of the left matrix is the same as the number of rows of the right matrix. If $A$ is an $m \times n$ matrix and $B$ is an $n \times p$ matrix, then their matrix product $AB$ is the $m \times p$ matrix whose entries are given by the dot product of the corresponding row of $A$ and the corresponding column of $B$: $(AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj} = \sum_{k=1}^{n} a_{ik}b_{kj},$ where $1 \le i \le m$ and $1 \le j \le p$. For example, the underlined entry 2340 in the product $\begin{bmatrix} 2 & 3 & 4 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1000 \\ 1 & 100 \\ 0 & 10 \end{bmatrix} = \begin{bmatrix} 3 & \underline{2340} \\ 0 & 1000 \end{bmatrix}$ is calculated as $(2 \times 1000) + (3 \times 100) + (4 \times 10) = 2340.$ Matrix multiplication satisfies the rules $(AB)C = A(BC)$ (associativity), and $(A + B)C = AC + BC$ as well as $C(A + B) = CA + CB$ (left and right distributivity), whenever the size of the matrices is such that the various products are defined. The product $AB$ may be defined without $BA$ being defined, namely if $A$ and $B$ are $m \times n$ and $n \times k$ matrices, respectively, and $m \ne k$. Even if both products are defined, they generally need not be equal, that is, $AB \ne BA$. In other words, matrix multiplication is not commutative, in marked contrast to (rational, real, or complex) numbers, whose product is independent of the order of the factors. An example of two matrices not commuting with each other is $\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 3 \end{bmatrix},$ whereas $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 4 \\ 0 & 0 \end{bmatrix}.$ Besides the ordinary matrix multiplication just described, other less frequently used operations on matrices that can be considered forms of multiplication also exist, such as the Hadamard product and the Kronecker product. They arise in solving matrix equations such as the Sylvester equation. Row operations There are three types of row operations: row addition, that is, adding a row to another; row multiplication, that is, multiplying all entries of a row by a non-zero constant; and row switching, that is, interchanging two rows of a matrix. These operations are used in several ways, including solving linear equations and finding matrix inverses. Submatrix A submatrix of a matrix is a matrix obtained by deleting any collection of rows and/or columns. For example, from the following 3-by-4 matrix, we can construct a 2-by-3 submatrix by removing row 3 and column 2: $\begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix} \to \begin{bmatrix} 1 & 3 & 4 \\ 5 & 7 & 8 \end{bmatrix}.$ The minors and cofactors of a matrix are found by computing the determinant of certain submatrices. A principal submatrix is a square submatrix obtained by removing certain rows and columns. The definition varies from author to author. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain. Other authors define a principal submatrix as one in which the first $k$ rows and columns, for some number $k$, are the ones that remain; this type of submatrix has also been called a leading principal submatrix. Linear equations Matrices can be used to compactly write and work with multiple linear equations, that is, systems of linear equations. For example, if $A$ is an $m \times n$ matrix, $\mathbf{x}$ designates a column vector (that is, an $n \times 1$ matrix) of $n$ variables $x_1, \dots, x_n$, and $\mathbf{b}$ is an $m \times 1$ column vector, then the matrix equation $A\mathbf{x} = \mathbf{b}$ is equivalent to the system of linear equations $a_{11}x_1 + \cdots + a_{1n}x_n = b_1, \;\dots,\; a_{m1}x_1 + \cdots + a_{mn}x_n = b_m.$ Using matrices, this can be solved more compactly than would be possible by writing out all the equations separately.
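Before returning to linear systems, the row-by-column multiplication rule above can be made concrete in code. This is a minimal sketch using plain Python lists (the helper name is illustrative); it reproduces the underlined entry 2340 from the example above.

```python
# A from-scratch implementation of the matrix product
# (AB)_ij = sum_k a_ik * b_kj, using plain nested lists.

def matmul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[2, 3, 4],
     [1, 0, 0]]
B = [[0, 1000],
     [1, 100],
     [0, 10]]

print(matmul(A, B))  # [[3, 2340], [0, 1000]] -- entry 2340 = 2*1000 + 3*100 + 4*10
```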
If $n = m$ and the equations are independent, then this can be done by writing $\mathbf{x} = A^{-1}\mathbf{b},$ where $A^{-1}$ is the inverse matrix of $A$. If $A$ has no inverse, solutions, if any, can be found using its generalized inverse. Linear transformations Matrices and matrix multiplication reveal their essential features when related to linear transformations, also known as linear maps. A real $m$-by-$n$ matrix $A$ gives rise to a linear transformation $\mathbb{R}^n \to \mathbb{R}^m$ mapping each vector $\mathbf{x}$ in $\mathbb{R}^n$ to the (matrix) product $A\mathbf{x}$, which is a vector in $\mathbb{R}^m$. Conversely, each linear transformation $f: \mathbb{R}^n \to \mathbb{R}^m$ arises from a unique $m$-by-$n$ matrix $A$: explicitly, the $(i, j)$-entry of $A$ is the $i$th coordinate of $f(\mathbf{e}_j)$, where $\mathbf{e}_j$ is the unit vector with 1 in the $j$th position and 0 elsewhere. The matrix $A$ is said to represent the linear map $f$, and $A$ is called the transformation matrix of $f$. For example, the 2×2 matrix $A = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$ can be viewed as the transform of the unit square into a parallelogram with vertices at $(0, 0)$, $(a, b)$, $(a + c, b + d)$, and $(c, d)$. This parallelogram is obtained by multiplying $A$ with each of the column vectors $\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix}$, and $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ in turn. These vectors define the vertices of the unit square. (The original article illustrates several 2×2 real matrices with their associated linear maps of $\mathbb{R}^2$, showing how a reference grid and shapes are mapped, with the origin marked by a black point.) Under the 1-to-1 correspondence between matrices and linear maps, matrix multiplication corresponds to composition of maps: if a $k$-by-$m$ matrix $B$ represents another linear map $g: \mathbb{R}^m \to \mathbb{R}^k$, then the composition $g \circ f$ is represented by $BA$, since $(g \circ f)(\mathbf{x}) = g(f(\mathbf{x})) = g(A\mathbf{x}) = B(A\mathbf{x}) = (BA)\mathbf{x}.$ The last equality follows from the above-mentioned associativity of matrix multiplication. The rank of a matrix $A$ is the maximum number of linearly independent row vectors of the matrix, which is the same as the maximum number of linearly independent column vectors. Equivalently, it is the dimension of the image of the linear map represented by $A$. The rank–nullity theorem states that the dimension of the kernel of a matrix plus the rank equals the number of columns of the matrix. Square matrix A square matrix is a matrix with the same number of rows and columns. An $n$-by-$n$ matrix is known as a square matrix of order $n$. Any two square matrices of the same order can be added and multiplied. The entries $a_{ii}$ form the main diagonal of a square matrix. They lie on the imaginary line that runs from the top left corner to the bottom right corner of the matrix. Main types Diagonal and triangular matrix If all entries of $A$ below the main diagonal are zero, $A$ is called an upper triangular matrix. Similarly, if all entries of $A$ above the main diagonal are zero, $A$ is called a lower triangular matrix. If all entries outside the main diagonal are zero, $A$ is called a diagonal matrix. For $n = 3$, these take the forms $\begin{bmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{bmatrix}$ (diagonal), $\begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$ (lower triangular), and $\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix}$ (upper triangular). Identity matrix The identity matrix $I_n$ of size $n$ is the $n$-by-$n$ matrix in which all the elements on the main diagonal are equal to 1 and all other elements are equal to 0, for example, $I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$ It is a square matrix of order $n$, and also a special kind of diagonal matrix. It is called an identity matrix because multiplication with it leaves a matrix unchanged: $AI_n = I_mA = A$ for any $m$-by-$n$ matrix $A$. A nonzero scalar multiple of an identity matrix is called a scalar matrix. If the matrix entries come from a field, the scalar matrices form a group, under matrix multiplication, that is isomorphic to the multiplicative group of nonzero elements of the field.
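A minimal NumPy sketch (NumPy assumed available; variable names are illustrative) ties together the last few ideas: a matrix acting on the unit square, composition of linear maps as matrix multiplication, and solving $A\mathbf{x} = \mathbf{b}$ numerically rather than forming $A^{-1}$ explicitly.

```python
import numpy as np

# The 2x2 matrix [[a, c], [b, d]] sends the unit square to a parallelogram.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
square = np.array([[0, 1, 1, 0],   # x-coordinates of the unit square's corners
                   [0, 0, 1, 1]])  # y-coordinates
print(A @ square)                  # columns are the parallelogram's vertices

# Composition of linear maps corresponds to matrix multiplication: B(Ax) = (BA)x.
B = np.array([[0.0, -1.0],
              [1.0,  0.0]])        # rotation by 90 degrees
x = np.array([3.0, 4.0])
assert np.allclose(B @ (A @ x), (B @ A) @ x)

# Solving Ax = b; np.linalg.solve is preferred over computing inv(A) explicitly.
b = np.array([2.0, 6.0])
x = np.linalg.solve(A, b)
print(x, np.allclose(A @ x, b))    # [-1.  3.] True
```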
Symmetric or skew-symmetric matrix A square matrix $A$ that is equal to its transpose, that is, $A = A^{\mathrm{T}}$, is a symmetric matrix. If instead $A$ is equal to the negative of its transpose, that is, $A = -A^{\mathrm{T}}$, then $A$ is a skew-symmetric matrix. In complex matrices, symmetry is often replaced by the concept of Hermitian matrices, which satisfy $A = A^*$, where the star or asterisk denotes the conjugate transpose of the matrix, that is, the transpose of the complex conjugate of $A$. By the spectral theorem, real symmetric matrices and complex Hermitian matrices have an eigenbasis; that is, every vector is expressible as a linear combination of eigenvectors. In both cases, all eigenvalues are real. This theorem can be generalized to infinite-dimensional situations related to matrices with infinitely many rows and columns, see below. Invertible matrix and its inverse A square matrix $A$ is called invertible or non-singular if there exists a matrix $B$ such that $AB = BA = I_n,$ where $I_n$ is the $n \times n$ identity matrix with 1s on the main diagonal and 0s elsewhere. If $B$ exists, it is unique and is called the inverse matrix of $A$, denoted $A^{-1}$. There are many algorithms for testing whether a square matrix is invertible and, if it is, computing its inverse. One of the oldest, which is still in common use, is Gaussian elimination. Definite matrix A symmetric real matrix $A$ is called positive-definite if the associated quadratic form $f(\mathbf{x}) = \mathbf{x}^{\mathrm{T}} A \mathbf{x}$ has a positive value for every nonzero vector $\mathbf{x}$ in $\mathbb{R}^n$. If $f$ only yields negative values, then $A$ is negative-definite; if $f$ does produce both negative and positive values, then $A$ is indefinite. If the quadratic form yields only non-negative values (positive or zero), the symmetric matrix is called positive-semidefinite (or, if only non-positive values, negative-semidefinite); hence the matrix is indefinite precisely when it is neither positive-semidefinite nor negative-semidefinite. A symmetric matrix is positive-definite if and only if all its eigenvalues are positive, that is, the matrix is positive-semidefinite and it is invertible. (For 2-by-2 matrices, the original article illustrates the positive-definite and indefinite possibilities side by side.) Allowing as input two different vectors instead yields the bilinear form associated to $A$: $B_A(\mathbf{x}, \mathbf{y}) = \mathbf{x}^{\mathrm{T}} A \mathbf{y}.$ In the case of complex matrices, the same terminology and results apply, with symmetric matrix, quadratic form, bilinear form, and transpose $A^{\mathrm{T}}$ replaced respectively by Hermitian matrix, Hermitian form, sesquilinear form, and conjugate transpose $A^*$. Orthogonal matrix An orthogonal matrix is a square matrix with real entries whose columns and rows are orthogonal unit vectors (that is, orthonormal vectors). Equivalently, a matrix $A$ is orthogonal if its transpose is equal to its inverse: $A^{\mathrm{T}} = A^{-1},$ which entails $A^{\mathrm{T}}A = AA^{\mathrm{T}} = I_n,$ where $I_n$ is the identity matrix of size $n$. An orthogonal matrix $A$ is necessarily invertible (with inverse $A^{-1} = A^{\mathrm{T}}$), unitary ($A^{-1} = A^*$), and normal ($A^*A = AA^*$). The determinant of any orthogonal matrix is either $+1$ or $-1$. A special orthogonal matrix is an orthogonal matrix with determinant $+1$. As a linear transformation, every orthogonal matrix with determinant $+1$ is a pure rotation without reflection, i.e., the transformation preserves the orientation of the transformed structure, while every orthogonal matrix with determinant $-1$ reverses the orientation, i.e., is a composition of a pure reflection and a (possibly null) rotation. The identity matrices have determinant 1 and are pure rotations by an angle zero. The complex analog of an orthogonal matrix is a unitary matrix. Main operations Trace The trace, $\operatorname{tr}(A)$, of a square matrix $A$ is the sum of its diagonal entries.
While matrix multiplication is not commutative as mentioned above, the trace of the product of two matrices is independent of the order of the factors: $\operatorname{tr}(AB) = \operatorname{tr}(BA).$ This is immediate from the definition of matrix multiplication: $\operatorname{tr}(AB) = \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}b_{ji} = \operatorname{tr}(BA).$ It follows that the trace of the product of more than two matrices is independent of cyclic permutations of the matrices; however, this does not in general apply for arbitrary permutations (for example, $\operatorname{tr}(ABC) \ne \operatorname{tr}(ACB)$, in general). Also, the trace of a matrix is equal to that of its transpose, that is, $\operatorname{tr}(A) = \operatorname{tr}(A^{\mathrm{T}}).$ Determinant The determinant of a square matrix $A$ (denoted $\det(A)$ or $|A|$) is a number encoding certain properties of the matrix. A matrix is invertible if and only if its determinant is nonzero. Its absolute value equals the area (in $\mathbb{R}^2$) or volume (in $\mathbb{R}^3$) of the image of the unit square (or cube), while its sign corresponds to the orientation of the corresponding linear map: the determinant is positive if and only if the orientation is preserved. The determinant of 2-by-2 matrices is given by $\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc.$ The determinant of 3-by-3 matrices involves 6 terms (rule of Sarrus). The more lengthy Leibniz formula generalizes these two formulae to all dimensions. The determinant of a product of square matrices equals the product of their determinants: $\det(AB) = \det(A) \cdot \det(B)$, or using alternate notation, $|AB| = |A| \cdot |B|.$ Adding a multiple of any row to another row, or a multiple of any column to another column, does not change the determinant. Interchanging two rows or two columns affects the determinant by multiplying it by −1. Using these operations, any matrix can be transformed to a lower (or upper) triangular matrix, and for such matrices, the determinant equals the product of the entries on the main diagonal; this provides a method to calculate the determinant of any matrix. Finally, the Laplace expansion expresses the determinant in terms of minors, that is, determinants of smaller matrices. This expansion can be used for a recursive definition of determinants (taking as starting case the determinant of a 1-by-1 matrix, which is its unique entry, or even the determinant of a 0-by-0 matrix, which is 1), which can be seen to be equivalent to the Leibniz formula. Determinants can be used to solve linear systems using Cramer's rule, where the division of the determinants of two related square matrices equates to the value of each of the system's variables. Eigenvalues and eigenvectors A number $\lambda$ and a non-zero vector $\mathbf{v}$ satisfying $A\mathbf{v} = \lambda\mathbf{v}$ are called an eigenvalue and an eigenvector of $A$, respectively. The number $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ if and only if $A - \lambda I_n$ is not invertible, which is equivalent to $\det(A - \lambda I_n) = 0.$ The polynomial $p_A$ in an indeterminate $X$ given by evaluation of the determinant $\det(XI_n - A)$ is called the characteristic polynomial of $A$. It is a monic polynomial of degree $n$. Therefore the polynomial equation $p_A(\lambda) = 0$ has at most $n$ different solutions, that is, eigenvalues of the matrix. They may be complex even if the entries of $A$ are real. According to the Cayley–Hamilton theorem, $p_A(A) = 0$, that is, the result of substituting the matrix itself into its characteristic polynomial yields the zero matrix. Computational aspects Matrix calculations can often be performed with different techniques. Many problems can be solved by both direct algorithms and iterative approaches. For example, the eigenvectors of a square matrix can be obtained by finding a sequence $\mathbf{x}_n$ of vectors converging to an eigenvector when $n$ tends to infinity. To choose the most appropriate algorithm for each specific problem, it is important to determine both the effectiveness and precision of all the available algorithms.
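The iterative approach just mentioned can be made concrete with power iteration, one of the simplest such schemes. The sketch below (NumPy assumed; names illustrative) repeatedly multiplies a vector by the matrix and normalizes, so that, under mild assumptions on the matrix, the sequence converges to an eigenvector for the eigenvalue of largest magnitude.

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Approximate a dominant eigenpair of A by repeated multiplication.

    Assumes A has a unique eigenvalue of largest absolute value and that
    the starting vector is not orthogonal to its eigenvector.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)      # normalize to avoid overflow/underflow
    eigenvalue = x @ A @ x          # Rayleigh quotient of the unit vector x
    return eigenvalue, x

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)                                          # close to 3, the largest eigenvalue
print(np.allclose(A @ v, lam * v, atol=1e-6))       # True
```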
The domain studying these matters is called numerical linear algebra. As with other numerical situations, two main aspects are the complexity of algorithms and their numerical stability. Determining the complexity of an algorithm means finding upper bounds or estimates of how many elementary operations such as additions and multiplications of scalars are necessary to perform some algorithm, for example, multiplication of matrices. Calculating the matrix product of two $n$-by-$n$ matrices using the definition given above needs $n^3$ multiplications, since for any of the $n^2$ entries of the product, $n$ multiplications are necessary. The Strassen algorithm outperforms this "naive" algorithm; it needs only $n^{\log_2 7} \approx n^{2.807}$ multiplications. A refined approach also incorporates specific features of the computing devices. In many practical situations, additional information about the matrices involved is known. An important case is sparse matrices, that is, matrices most of whose entries are zero. There are specifically adapted algorithms for, say, solving linear systems $A\mathbf{x} = \mathbf{b}$ for sparse matrices $A$, such as the conjugate gradient method. An algorithm is, roughly speaking, numerically stable if little deviations in the input values do not lead to big deviations in the result. For example, calculating the inverse of a matrix via Laplace expansion, $A^{-1} = \operatorname{adj}(A)/\det(A)$ (where $\operatorname{adj}(A)$ denotes the adjugate matrix of $A$), may lead to significant rounding errors if the determinant of the matrix is very small. The norm of a matrix can be used to capture the conditioning of linear algebraic problems, such as computing a matrix's inverse. Most computer programming languages support arrays but are not designed with built-in commands for matrices. Instead, available external libraries provide matrix operations on arrays, in nearly all currently used programming languages. Matrix manipulation was among the earliest numerical applications of computers. The original Dartmouth BASIC had built-in commands for matrix arithmetic on arrays from its second-edition implementation in 1964. As early as the 1970s, some engineering desktop computers such as the HP 9830 had ROM cartridges to add BASIC commands for matrices. Some computer languages such as APL were designed to manipulate matrices, and various mathematical programs can be used to aid computing with matrices. As of 2023, most computers have some form of built-in matrix operations at a low level implementing the standard BLAS specification, upon which most higher-level matrix and linear algebra libraries (e.g., EISPACK, LINPACK, LAPACK) rely. While most of these libraries require a professional level of coding, LAPACK can be accessed by higher-level (and user-friendly) bindings such as NumPy/SciPy, R, GNU Octave, and MATLAB. Decomposition There are several methods to render matrices into a more easily accessible form. They are generally referred to as matrix decomposition or matrix factorization techniques. The interest of all these techniques is that they preserve certain properties of the matrices in question, such as determinant, rank, or inverse, so that these quantities can be calculated after applying the transformation, or that certain matrix operations are algorithmically easier to carry out for some types of matrices. The LU decomposition factors matrices as a product of a lower ($L$) and an upper ($U$) triangular matrix. Once this decomposition is calculated, linear systems can be solved more efficiently, by a simple technique called forward and back substitution.
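As a sketch of how this is used in practice (SciPy assumed available; any LU routine would serve), one factorization of $A$ can be reused to solve several systems, each solve costing only a forward and a back substitution:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# Factor once: lu_factor returns the combined L and U factors plus pivot indices.
lu, piv = lu_factor(A)

# Reuse the factorization for several right-hand sides; each solve is a
# forward substitution (with L) followed by a back substitution (with U).
for b in (np.array([10.0, 12.0]), np.array([1.0, 0.0])):
    x = lu_solve((lu, piv), b)
    print(x, np.allclose(A @ x, b))
```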
Likewise, inverses of triangular matrices are algorithmically easier to calculate. The Gaussian elimination is a similar algorithm; it transforms any matrix to row echelon form. Both methods proceed by multiplying the matrix by suitable elementary matrices, which correspond to permuting rows or columns and adding multiples of one row to another row. Singular value decomposition expresses any matrix $A$ as a product $UDV^*$, where $U$ and $V$ are unitary matrices and $D$ is a diagonal matrix. The eigendecomposition or diagonalization expresses $A$ as a product $VDV^{-1}$, where $D$ is a diagonal matrix and $V$ is a suitable invertible matrix. If $A$ can be written in this form, it is called diagonalizable. More generally, and applicable to all matrices, the Jordan decomposition transforms a matrix into Jordan normal form, that is to say matrices whose only nonzero entries are the eigenvalues $\lambda_1$ to $\lambda_n$ of $A$, placed on the main diagonal, and possibly entries equal to one directly above the main diagonal. Given the eigendecomposition, the $k$th power of $A$ (that is, $k$-fold iterated matrix multiplication) can be calculated via $A^k = VD^kV^{-1},$ and the power of a diagonal matrix can be calculated by taking the corresponding powers of the diagonal entries, which is much easier than doing the exponentiation for $A$ instead. This can be used to compute the matrix exponential $e^A$, a need frequently arising in solving linear differential equations, matrix logarithms and square roots of matrices. To avoid numerically ill-conditioned situations, further algorithms such as the Schur decomposition can be employed. Abstract algebraic aspects and generalizations Matrices can be generalized in different ways. Abstract algebra uses matrices with entries in more general fields or even rings, while linear algebra codifies properties of matrices in the notion of linear maps. It is possible to consider matrices with infinitely many columns and rows. Another extension is tensors, which can be seen as higher-dimensional arrays of numbers, as opposed to vectors, which can often be realized as sequences of numbers, while matrices are rectangular or two-dimensional arrays of numbers. Matrices, subject to certain requirements, tend to form groups known as matrix groups. Similarly, under certain conditions, matrices form rings known as matrix rings. Though the product of matrices is not in general commutative, certain matrices form fields known as matrix fields. In general, matrices and their multiplication also form a category, the category of matrices. Matrices with more general entries This article focuses on matrices whose entries are real or complex numbers. However, matrices can be considered with much more general types of entries than real or complex numbers. As a first step of generalization, any field, that is, a set where addition, subtraction, multiplication, and division operations are defined and well-behaved, may be used instead of $\mathbb{R}$ or $\mathbb{C}$, for example rational numbers or finite fields. For example, coding theory makes use of matrices over finite fields. Wherever eigenvalues are considered, as these are roots of a polynomial, they may exist only in a larger field than that of the entries of the matrix; for instance, they may be complex in the case of a matrix with real entries. The possibility to reinterpret the entries of a matrix as elements of a larger field (for example, to view a real matrix as a complex matrix whose entries happen to be all real) then allows considering each square matrix to possess a full set of eigenvalues.
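Returning briefly to the decompositions above, the eigendecomposition-based power computation can be sketched directly (NumPy assumed; this presupposes, as the text notes, that the matrix is diagonalizable):

```python
import numpy as np

def matrix_power_via_eig(A, k):
    """Compute A^k as V D^k V^{-1}, assuming A is diagonalizable."""
    eigenvalues, V = np.linalg.eig(A)
    Dk = np.diag(eigenvalues ** k)   # powering a diagonal matrix is entrywise
    return V @ Dk @ np.linalg.inv(V)

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.allclose(matrix_power_via_eig(A, 5),
                  np.linalg.matrix_power(A, 5)))   # True
```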
Alternatively one can consider only matrices with entries in an algebraically closed field, such as $\mathbb{C}$, from the outset. More generally, matrices with entries in a ring $R$ are widely used in mathematics. Rings are a more general notion than fields in that a division operation need not exist. The very same addition and multiplication operations of matrices extend to this setting, too. The set $\mathcal{M}(n, R)$ (also denoted $\mathcal{M}_n(R)$) of all square $n$-by-$n$ matrices over $R$ is a ring called the matrix ring, isomorphic to the endomorphism ring of the left $R$-module $R^n$. If the ring $R$ is commutative, that is, its multiplication is commutative, then the ring $\mathcal{M}(n, R)$ is also an associative algebra over $R$. The determinant of square matrices over a commutative ring $R$ can still be defined using the Leibniz formula; such a matrix is invertible if and only if its determinant is invertible in $R$, generalizing the situation over a field $F$, where every nonzero element is invertible. Matrices over superrings are called supermatrices. Matrices do not always have all their entries in the same ring, or even in any ring at all. One special but common case is block matrices, which may be considered as matrices whose entries themselves are matrices. The entries need not be square matrices, and thus need not be members of any ring; but their sizes must fulfill certain compatibility conditions. Relationship to linear maps Linear maps $\mathbb{R}^n \to \mathbb{R}^m$ are equivalent to $m$-by-$n$ matrices, as described above. More generally, any linear map $f: V \to W$ between finite-dimensional vector spaces can be described by a matrix $A = (a_{ij})$, after choosing bases $\mathbf{v}_1, \dots, \mathbf{v}_n$ of $V$, and $\mathbf{w}_1, \dots, \mathbf{w}_m$ of $W$ (so $n$ is the dimension of $V$ and $m$ is the dimension of $W$), which is such that $f(\mathbf{v}_j) = \sum_{i=1}^{m} a_{ij}\mathbf{w}_i \quad \text{for } j = 1, \dots, n.$ In other words, column $j$ of $A$ expresses the image of $\mathbf{v}_j$ in terms of the basis vectors $\mathbf{w}_i$ of $W$; thus this relation uniquely determines the entries of the matrix $A$. The matrix depends on the choice of the bases: different choices of bases give rise to different, but equivalent matrices. Many of the above concrete notions can be reinterpreted in this light; for example, the transpose matrix $A^{\mathrm{T}}$ describes the transpose of the linear map given by $A$, concerning the dual bases. These properties can be restated more naturally: the category of matrices with entries in a field $F$ with multiplication as composition is equivalent to the category of finite-dimensional vector spaces and linear maps over this field. More generally, the set of $m \times n$ matrices can be used to represent the $R$-linear maps between the free modules $R^m$ and $R^n$ for an arbitrary ring $R$ with unity. When $n = m$, composition of these maps is possible, and this gives rise to the matrix ring of $n \times n$ matrices representing the endomorphism ring of $R^n$. Matrix groups A group is a mathematical structure consisting of a set of objects together with a binary operation, that is, an operation combining any two objects to a third, subject to certain requirements. A group in which the objects are matrices and the group operation is matrix multiplication is called a matrix group. Since in a group every element must be invertible, the most general matrix groups are the groups of all invertible matrices of a given size, called the general linear groups. Any property of matrices that is preserved under matrix products and inverses can be used to define further matrix groups. For example, matrices with a given size and with a determinant of 1 form a subgroup of (that is, a smaller group contained in) their general linear group, called a special linear group. Orthogonal matrices, determined by the condition $M^{\mathrm{T}}M = I,$ form the orthogonal group. Every orthogonal matrix has determinant 1 or −1.
Orthogonal matrices with determinant 1 form a subgroup called the special orthogonal group. Every finite group is isomorphic to a matrix group, as one can see by considering the regular representation of the symmetric group. General groups can be studied using matrix groups, which are comparatively well understood, using representation theory. Infinite matrices It is also possible to consider matrices with infinitely many rows and/or columns, even though, being infinite objects, one cannot write down such matrices explicitly. All that matters is that for every element in the set indexing rows, and every element in the set indexing columns, there is a well-defined entry (these index sets need not even be subsets of the natural numbers). The basic operations of addition, subtraction, scalar multiplication, and transposition can still be defined without problem; however, matrix multiplication may involve infinite summations to define the resulting entries, and these are not defined in general. If $R$ is any ring with unity, then the ring of endomorphisms of a direct sum of copies of $R$, indexed by a set $I$ and considered as a right $R$-module, is isomorphic to the ring of column-finite matrices whose entries are indexed by $I \times I$ and whose columns each contain only finitely many nonzero entries. The endomorphisms of this module considered as a left $R$-module result in an analogous object, the row-finite matrices whose rows each only have finitely many nonzero entries. If infinite matrices are used to describe linear maps, then only those matrices can be used all of whose columns have but a finite number of nonzero entries, for the following reason. For a matrix $A$ to describe a linear map $f: V \to W$, bases for both spaces must have been chosen; recall that by definition this means that every vector in the space can be written uniquely as a (finite) linear combination of basis vectors, so that, written as a (column) vector $\mathbf{v}$ of coefficients, only finitely many entries $v_i$ are nonzero. Now the columns of $A$ describe the images by $f$ of individual basis vectors of $V$ in the basis of $W$, which is only meaningful if these columns have only finitely many nonzero entries. There is no restriction on the rows of $A$, however: in the product $A\mathbf{v}$ there are only finitely many nonzero coefficients of $\mathbf{v}$ involved, so every one of its entries, even if it is given as an infinite sum of products, involves only finitely many nonzero terms and is therefore well defined. Moreover, this amounts to forming a linear combination of the columns of $A$ that effectively involves only finitely many of them, whence the result has only finitely many nonzero entries because each of those columns does. Products of two matrices of the given type are well defined (provided that the column-index and row-index sets match), are of the same type, and correspond to the composition of linear maps. If $R$ is a normed ring, then the condition of row or column finiteness can be relaxed. With the norm in place, absolutely convergent series can be used instead of finite sums. For example, the matrices whose column sums are convergent sequences form a ring. Analogously, the matrices whose row sums are convergent series also form a ring. Infinite matrices can also be used to describe operators on Hilbert spaces, where convergence and continuity questions arise, which again results in certain constraints that must be imposed. However, the explicit point of view of matrices tends to obfuscate the matter, and the abstract and more powerful tools of functional analysis can be used instead. Empty matrix An empty matrix is a matrix in which the number of rows or columns (or both) is zero.
Empty matrices help to deal with maps involving the zero vector space. For example, if $A$ is a 3-by-0 matrix and $B$ is a 0-by-3 matrix, then $AB$ is the 3-by-3 zero matrix corresponding to the null map from a 3-dimensional space to itself, while $BA$ is a 0-by-0 matrix. There is no common notation for empty matrices, but most computer algebra systems allow creating and computing with them. The determinant of the 0-by-0 matrix is 1, as follows from regarding the empty product occurring in the Leibniz formula for the determinant as 1. This value is also consistent with the fact that the identity map from any finite-dimensional space to itself has determinant 1, a fact that is often used as a part of the characterization of determinants. Applications There are numerous applications of matrices, both in mathematics and other sciences. Some of them merely take advantage of the compact representation of a set of numbers in a matrix. For example, in game theory and economics, the payoff matrix encodes the payoff for two players, depending on which out of a given (finite) set of strategies the players choose. Text mining and automated thesaurus compilation makes use of document-term matrices such as tf-idf to track frequencies of certain words in several documents. Complex numbers can be represented by particular real 2-by-2 matrices via $a + bi \leftrightarrow \begin{bmatrix} a & -b \\ b & a \end{bmatrix},$ under which addition and multiplication of complex numbers and matrices correspond to each other. For example, 2-by-2 rotation matrices represent the multiplication with some complex number of absolute value 1, as above. A similar interpretation is possible for quaternions and Clifford algebras in general. Early encryption techniques such as the Hill cipher also used matrices. However, due to the linear nature of matrices, these codes are comparatively easy to break. Computer graphics uses matrices to represent objects; to calculate transformations of objects using affine rotation matrices to accomplish tasks such as projecting a three-dimensional object onto a two-dimensional screen, corresponding to a theoretical camera observation; and to apply image convolutions such as sharpening, blurring, edge detection, and more. Matrices over a polynomial ring are important in the study of control theory. Chemistry makes use of matrices in various ways, particularly since the use of quantum theory to discuss molecular bonding and spectroscopy. Examples are the overlap matrix and the Fock matrix used in solving the Roothaan equations to obtain the molecular orbitals of the Hartree–Fock method. Graph theory The adjacency matrix of a finite graph is a basic notion of graph theory. It records which vertices of the graph are connected by an edge. Matrices containing just two different values (1 and 0 meaning, for example, "yes" and "no", respectively) are called logical matrices. The distance (or cost) matrix contains information about the distances of the edges. These concepts can be applied to websites connected by hyperlinks or cities connected by roads etc., in which case (unless the connection network is extremely dense) the matrices tend to be sparse, that is, contain few nonzero entries. Therefore, specifically tailored matrix algorithms can be used in network theory.
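A small NumPy sketch of an adjacency matrix (the graph is invented for illustration); a standard consequence of the definition, used here as a check, is that the $(i, j)$ entry of the $k$th power of the adjacency matrix counts the walks of length $k$ from vertex $i$ to vertex $j$:

```python
import numpy as np

# Adjacency matrix of an undirected graph: vertices 0, 1, 2 form a
# triangle, and vertex 3 is attached only to vertex 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

print(A[0, 3])                   # 0: no edge between vertices 0 and 3
A2 = np.linalg.matrix_power(A, 2)
print(A2[0, 3])                  # 1: one walk of length 2 (0 -> 2 -> 3)
print(A2.diagonal())             # entry i is the degree of vertex i
```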
Analysis and geometry The Hessian matrix of a twice-differentiable function $f: \mathbb{R}^n \to \mathbb{R}$ consists of the second derivatives of $f$ with respect to the several coordinate directions, that is, $H(f) = \left[ \frac{\partial^2 f}{\partial x_i \, \partial x_j} \right].$ It encodes information about the local growth behavior of the function: given a critical point $\mathbf{x} = (x_1, \dots, x_n)$, that is, a point where the first partial derivatives $\partial f / \partial x_i$ of $f$ vanish, the function has a local minimum if the Hessian matrix is positive definite. Quadratic programming can be used to find global minima or maxima of quadratic functions closely related to the ones attached to matrices (see above). Another matrix frequently used in geometrical situations is the Jacobi matrix of a differentiable map $f: \mathbb{R}^n \to \mathbb{R}^m$. If $f_1, \dots, f_m$ denote the components of $f$, then the Jacobi matrix is defined as $J(f) = \left[ \frac{\partial f_i}{\partial x_j} \right]_{1 \le i \le m,\, 1 \le j \le n}.$ If $n > m$, and if the rank of the Jacobi matrix attains its maximal value $m$, $f$ is locally invertible at that point, by the implicit function theorem. Partial differential equations can be classified by considering the matrix of coefficients of the highest-order differential operators of the equation. For elliptic partial differential equations this matrix is positive definite, which has a decisive influence on the set of possible solutions of the equation in question. The finite element method is an important numerical method to solve partial differential equations, widely applied in simulating complex physical systems. It attempts to approximate the solution to some equation by piecewise linear functions, where the pieces are chosen with respect to a sufficiently fine grid, which in turn can be recast as a matrix equation. Probability theory and statistics Stochastic matrices are square matrices whose rows are probability vectors, that is, whose entries are non-negative and sum up to one. Stochastic matrices are used to define Markov chains with finitely many states. A row of the stochastic matrix gives the probability distribution for the next position of some particle currently in the state that corresponds to the row. Properties of the Markov chain, like absorbing states, that is, states that any particle attains eventually, can be read off the eigenvectors of the transition matrices. Statistics also makes use of matrices in many different forms. Descriptive statistics is concerned with describing data sets, which can often be represented as data matrices, which may then be subjected to dimensionality reduction techniques. The covariance matrix encodes the mutual variance of several random variables. Another technique using matrices is linear least squares, a method that approximates a finite set of pairs $(x_1, y_1), \dots, (x_N, y_N)$ by a linear function $y_i \approx ax_i + b$, $i = 1, \dots, N$, which can be formulated in terms of matrices, related to the singular value decomposition of matrices. Random matrices are matrices whose entries are random numbers, subject to suitable probability distributions, such as the matrix normal distribution. Beyond probability theory, they are applied in domains ranging from number theory to physics. Symmetries and transformations in physics Linear transformations and the associated symmetries play a key role in modern physics. For example, elementary particles in quantum field theory are classified as representations of the Lorentz group of special relativity and, more specifically, by their behavior under the spin group. Concrete representations involving the Pauli matrices and more general gamma matrices are an integral part of the physical description of fermions, which behave as spinors.
For the three lightest quarks, there is a group-theoretical representation involving the special unitary group SU(3); for their calculations, physicists use a convenient matrix representation known as the Gell-Mann matrices, which are also used for the SU(3) gauge group that forms the basis of the modern description of strong nuclear interactions, quantum chromodynamics. The Cabibbo–Kobayashi–Maskawa matrix, in turn, expresses the fact that the basic quark states that are important for weak interactions are not the same as, but linearly related to, the basic quark states that define particles with specific and distinct masses. Linear combinations of quantum states The first model of quantum mechanics (Heisenberg, 1925) represented the theory's operators by infinite-dimensional matrices acting on quantum states. This is also referred to as matrix mechanics. One particular example is the density matrix that characterizes the "mixed" state of a quantum system as a linear combination of elementary, "pure" eigenstates. Another matrix serves as a key tool for describing the scattering experiments that form the cornerstone of experimental particle physics: collision reactions such as occur in particle accelerators, where non-interacting particles head towards each other and collide in a small interaction zone, with a new set of non-interacting particles as the result, can be described as the scalar product of outgoing particle states and a linear combination of ingoing particle states. The linear combination is given by a matrix known as the S-matrix, which encodes all information about the possible interactions between particles. Normal modes A general application of matrices in physics is the description of linearly coupled harmonic systems. The equations of motion of such systems can be described in matrix form, with a mass matrix multiplying a generalized velocity to give the kinetic term, and a force matrix multiplying a displacement vector to characterize the interactions. The best way to obtain solutions is to determine the system's eigenvectors, its normal modes, by diagonalizing the matrix equation. Techniques like this are crucial when it comes to the internal dynamics of molecules: the internal vibrations of systems consisting of mutually bound component atoms. They are also needed for describing mechanical vibrations, and oscillations in electrical circuits. Geometrical optics Geometrical optics provides further matrix applications. In this approximative theory, the wave nature of light is neglected. The result is a model in which light rays are indeed geometrical rays. If the deflection of light rays by optical elements is small, the action of a lens or reflective element on a given light ray can be expressed as multiplication of a two-component vector with a two-by-two matrix called a ray transfer matrix: the vector's components are the light ray's slope and its distance from the optical axis, while the matrix encodes the properties of the optical element; the technique is known as ray transfer matrix analysis. There are two kinds of matrices, viz. a refraction matrix describing the refraction at a lens surface, and a translation matrix, describing the translation of the plane of reference to the next refracting surface, where another refraction matrix applies. The optical system, consisting of a combination of lenses and/or reflective elements, is simply described by the matrix resulting from the product of the components' matrices.
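As a concrete illustration of ray transfer matrix analysis, the following short Python sketch (added for illustration; the numerical values and the (height, slope) ordering of the ray vector are assumptions, since conventions vary) traces a ray entering parallel to the axis through a thin lens, which brings it to the axis at the focal plane:

import numpy as np

def translation(d):
    # Free propagation over a distance d: height changes by d * slope.
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    # Thin lens of focal length f: slope changes by -height / f.
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A ray 1 mm above the axis, initially parallel to it: (height, slope).
ray = np.array([1.0, 0.0])

# The optical system is the product of its components' matrices, applied
# right to left: propagate 50 mm, pass a thin lens of focal length 50 mm,
# then propagate another 50 mm.
system = translation(50.0) @ thin_lens(50.0) @ translation(50.0)
print(system @ ray)   # [ 0.  -0.02]: the ray crosses the axis at the focus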
Electronics Traditional mesh analysis and nodal analysis in electronics lead to a system of linear equations that can be described with a matrix. The behavior of many electronic components can be described using matrices. Let x be a 2-dimensional vector with the component's input voltage and input current as its elements, and let y be a 2-dimensional vector with the component's output voltage and output current as its elements. Then the behavior of the electronic component can be described by $y = H \cdot x$, where H is a 2 × 2 matrix containing one impedance element ($h_{11}$), one admittance element ($h_{22}$), and two dimensionless elements ($h_{12}$ and $h_{21}$). Calculating a circuit now reduces to multiplying matrices. History Matrices have a long history of application in solving linear equations but they were known as arrays until the 1800s. The Chinese text The Nine Chapters on the Mathematical Art written in the 10th–2nd century BCE is the first example of the use of array methods to solve simultaneous equations, including the concept of determinants. In 1545 the Italian mathematician Gerolamo Cardano introduced the method to Europe when he published Ars Magna. The Japanese mathematician Seki used the same array methods to solve simultaneous equations in 1683. The Dutch mathematician Jan de Witt represented transformations using arrays in his 1659 book Elements of Curves. Between 1700 and 1710 Gottfried Wilhelm Leibniz publicized the use of arrays for recording information or solutions and experimented with over 50 different systems of arrays. Cramer presented his rule in 1750. The term "matrix" (Latin for "womb", "dam" (a non-human female animal kept for breeding), "source", "origin", "list", and "register", derived from mater, "mother") was coined by James Joseph Sylvester in 1850, who understood a matrix as an object giving rise to several determinants, today called minors, that is to say, determinants of smaller matrices that derive from the original one by removing columns and rows. Sylvester elaborated on this view in an 1851 paper. Arthur Cayley published a treatise on geometric transformations using matrices that were not rotated versions of the coefficients being investigated, as had previously been done. Instead, he defined operations such as addition, subtraction, multiplication, and division as transformations of those matrices and showed that the associative and distributive properties held. Cayley investigated and demonstrated the non-commutative property of matrix multiplication as well as the commutative property of matrix addition. Early matrix theory had limited the use of arrays almost exclusively to determinants, and Arthur Cayley's abstract matrix operations were revolutionary. He was instrumental in proposing a matrix concept independent of equation systems. In 1858 Cayley published his A memoir on the theory of matrices in which he proposed and demonstrated the Cayley–Hamilton theorem. The English mathematician Cuthbert Edmund Cullis was the first to use modern bracket notation for matrices, in 1913, and he simultaneously demonstrated the first significant use of the notation $A = [a_{i,j}]$ to represent a matrix, where $a_{i,j}$ refers to the element in the ith row and the jth column. The modern study of determinants sprang from several sources. Number-theoretical problems led Gauss to relate coefficients of quadratic forms, that is, expressions such as $ax^2 + 2bxy + cy^2$, and linear maps in three dimensions to matrices. Eisenstein further developed these notions, including the remark that, in modern parlance, matrix products are non-commutative.
Cauchy was the first to prove general statements about determinants, using as the definition of the determinant of a matrix $A = [a_{i,j}]$ the following: replace the powers $a_j^k$ by $a_{jk}$ in the polynomial $a_1 a_2 \cdots a_n \prod_{i < j} (a_j - a_i)$, where $\prod$ denotes the product of the indicated terms. He also showed, in 1829, that the eigenvalues of symmetric matrices are real. Jacobi studied "functional determinants" – later called Jacobi determinants by Sylvester – which can be used to describe geometric transformations at a local (or infinitesimal) level, see above. Kronecker's Vorlesungen über die Theorie der Determinanten and Weierstrass' Zur Determinantentheorie, both published in 1903, first treated determinants axiomatically, as opposed to previous more concrete approaches such as the mentioned formula of Cauchy. At that point, determinants were firmly established. Many theorems were first established for small matrices only; for example, the Cayley–Hamilton theorem was proved for 2×2 matrices by Cayley in the aforementioned memoir, and by Hamilton for 4×4 matrices. Frobenius, working on bilinear forms, generalized the theorem to all dimensions (1898). Also at the end of the 19th century, the Gauss–Jordan elimination (generalizing a special case now known as Gauss elimination) was established by Wilhelm Jordan. In the early 20th century, matrices attained a central role in linear algebra, partially due to their use in the classification of the hypercomplex number systems of the previous century. The inception of matrix mechanics by Heisenberg, Born and Jordan led to studying matrices with infinitely many rows and columns. Later, von Neumann carried out the mathematical formulation of quantum mechanics, by further developing functional analytic notions such as linear operators on Hilbert spaces, which, very roughly speaking, correspond to Euclidean space, but with an infinity of independent directions. Other historical usages of the word "matrix" in mathematics The word has been used in unusual ways by at least two authors of historical importance. Bertrand Russell and Alfred North Whitehead in their Principia Mathematica (1910–1913) use the word "matrix" in the context of their axiom of reducibility. They proposed this axiom as a means to reduce any function to one of lower type, successively, so that at the "bottom" (0 order) the function is identical to its extension. For example, a function $\Phi(x, y)$ of two variables $x$ and $y$ can be reduced to a collection of functions of a single variable, for example $y$, by "considering" the function for all possible values of "individuals" $a_i$ substituted in place of the variable $x$. The resulting collection of functions of the single variable $y$, that is, $\Phi(a_i, y)$, can then be reduced to a "matrix" of values by "considering" the function for all possible values of "individuals" $b_i$ substituted in place of the variable $y$. Alfred Tarski in his 1946 Introduction to Logic used the word "matrix" synonymously with the notion of truth table as used in mathematical logic.
Higgs boson
The Higgs boson, sometimes called the Higgs particle, is an elementary particle in the Standard Model of particle physics produced by the quantum excitation of the Higgs field, one of the fields in particle physics theory. In the Standard Model, the Higgs particle is a massive scalar boson with zero spin, even (positive) parity, no electric charge, and no colour charge that couples to (interacts with) mass. It is also very unstable, decaying into other particles almost immediately upon generation. The Higgs field is a scalar field with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. Its "Sombrero potential" leads it to take a nonzero value everywhere (including otherwise empty space), which breaks the weak isospin symmetry of the electroweak interaction and, via the Higgs mechanism, gives a rest mass to all massive elementary particles of the Standard Model, including the Higgs boson itself. The existence of the Higgs field became the last unverified part of the Standard Model of particle physics, and for several decades was considered "the central problem in particle physics". Both the field and the boson are named after physicist Peter Higgs, who in 1964, along with five other scientists in three teams, proposed the Higgs mechanism, a way for some particles to acquire mass. All fundamental particles known at the time should be massless at very high energies, but fully explaining how some particles gain mass at lower energies had been extremely difficult. If these ideas were correct, a particle known as a scalar boson (with certain properties) should also exist. This particle was called the Higgs boson and could be used to test whether the Higgs field was the correct explanation. After a 40-year search, a subatomic particle with the expected properties was discovered in 2012 by the ATLAS and CMS experiments at the Large Hadron Collider (LHC) at CERN near Geneva, Switzerland. The new particle was subsequently confirmed to match the expected properties of a Higgs boson. Physicists from two of the three teams, Peter Higgs and François Englert, were awarded the Nobel Prize in Physics in 2013 for their theoretical predictions. Although Higgs's name has come to be associated with this theory, several researchers between about 1960 and 1972 independently developed different parts of it. In the media, the Higgs boson has often been called the "God particle" after the 1993 book The God Particle by Nobel Laureate Leon Lederman. The name has been criticised by physicists, including Peter Higgs. Introduction Standard Model Physicists explain the fundamental particles and forces of our universe in terms of the Standard Model – a widely accepted framework based on quantum field theory that predicts, with great accuracy, almost all known particles and forces aside from gravity. (A separate theory, general relativity, is used for gravity.) In the Standard Model, the particles and forces in nature (aside from gravity) arise from properties of quantum fields known as gauge invariance and symmetries. Forces in the Standard Model are transmitted by particles known as gauge bosons. Gauge invariant theories and symmetries "It is only slightly overstating the case to say that physics is the study of symmetry" – Philip Anderson, Nobel laureate in physics. Gauge invariant theories are theories with a useful feature: certain kinds of changes to the value of some quantity make no difference to the outcomes or the measurements we make.
For example, changing voltages in an electromagnet by +100 volts does not cause any change to the magnetic field it produces. Similarly, measuring the speed of light in vacuum gives the identical result, whatever the location in time and space, and whatever the local gravitational field. In these kinds of theories, the gauge is an item whose value we can change. The fact that some changes leave the results we measure unchanged means it is a gauge invariant theory, and symmetries are the specific kinds of changes to the gauge which have the effect of leaving measurements unchanged. Symmetries of this kind are powerful tools for a deep understanding of the fundamental forces and particles of our physical world. Gauge invariance is therefore an important property within particle physics theory; such symmetries are closely connected to conservation laws and are described mathematically using group theory. Quantum field theory and the Standard Model are both gauge invariant theories – meaning they focus on properties of our universe that demonstrate gauge invariance, and on the symmetries involved. Gauge boson (rest) mass problem Quantum field theories based on gauge invariance had been used with great success in understanding the electromagnetic and strong forces, but by around 1960, all attempts to create a gauge invariant theory for the weak force (and its combination with the electromagnetic force, known together as the electroweak interaction) had consistently failed. As a result of these failures, gauge theories began to fall into disrepute. The problem was that symmetry requirements for these two forces incorrectly predicted the weak force's gauge bosons (W and Z) would have "zero mass" (in the specialized terminology of particle physics, "mass" refers specifically to a particle's rest mass). But experiments showed the W and Z gauge bosons had non-zero (rest) mass. Further, many promising solutions seemed to require the existence of extra particles known as Goldstone bosons, but evidence suggested these did not exist either. This meant either gauge invariance was an incorrect approach, or something unknown was giving the weak force's W and Z bosons their mass, and doing it in a way that did not create Goldstone bosons. By the late 1950s and early 1960s, physicists were at a loss as to how to resolve these issues, or how to create a comprehensive theory for particle physics. Symmetry breaking In the late 1950s, Yoichiro Nambu recognised that spontaneous symmetry breaking, a process where a symmetric system becomes asymmetric, could occur under certain conditions. Symmetry breaking occurs when some variable that previously did not affect the measured results (it was originally a "symmetry") now does affect them (it is now "broken" and no longer a symmetry). In 1962 physicist Philip Anderson, an expert in condensed matter physics, observed that symmetry breaking played a role in superconductivity, and suggested it could also be part of the answer to the problem of gauge invariance in particle physics. Specifically, Anderson suggested that the Goldstone bosons that would result from symmetry breaking might instead, in some circumstances, be "absorbed" by the massless W and Z bosons. If so, perhaps the Goldstone bosons would not exist, and the W and Z bosons could gain mass, solving both problems at once. Similar behaviour was already theorised in superconductivity.
In 1964, this was shown to be theoretically possible by physicists Abraham Klein and Benjamin Lee, at least for some limited (non-relativistic) cases. Higgs mechanism Following the 1963 and early 1964 papers, three groups of researchers independently developed these theories more completely, in what became known as the 1964 PRL symmetry breaking papers. All three groups reached similar conclusions, and for all cases, not just the limited cases previously considered. They showed that the conditions for electroweak symmetry would be "broken" if an unusual type of field existed throughout the universe, and indeed, there would be no Goldstone bosons and some existing bosons would acquire mass. The field required for this to happen (which was purely hypothetical at the time) became known as the Higgs field (after Peter Higgs, one of the researchers) and the mechanism by which it led to symmetry breaking became known as the Higgs mechanism. A key feature of the necessary field is that it would take less energy for the field to have a non-zero value than a zero value, unlike all other known fields; therefore, the Higgs field has a non-zero value (or vacuum expectation value) everywhere. This non-zero value could in theory break electroweak symmetry. It was the first proposal capable of showing how the weak force gauge bosons could have mass despite their governing symmetry, within a gauge invariant theory. Although these ideas did not gain much initial support or attention, by 1972 they had been developed into a comprehensive theory and proved capable of giving "sensible" results that accurately described particles known at the time, and which, with exceptional accuracy, predicted several other particles discovered during the following years. During the 1970s these theories rapidly became the Standard Model of particle physics. Higgs field To allow symmetry breaking, the Standard Model includes a field of the kind needed to "break" electroweak symmetry and give particles their correct mass. This field, which became known as the "Higgs field", was hypothesized to exist throughout space, and to break some symmetry laws of the electroweak interaction, triggering the Higgs mechanism. It would therefore cause the W and Z gauge bosons of the weak force to be massive at all temperatures below an extremely high value. When the weak force bosons acquire mass, this affects the distance they can freely travel, which becomes very small, also matching experimental findings. Furthermore, it was later realised that the same field would also explain, in a different way, why other fundamental constituents of matter (including electrons and quarks) have mass. Unlike all other known fields, such as the electromagnetic field, the Higgs field is a scalar field, and has a non-zero average value in vacuum. The "central problem" There was not yet any direct evidence that the Higgs field existed, but even without direct proof, the accuracy of its predictions led scientists to believe the theory might be true. By the 1980s, the question of whether the Higgs field existed, and therefore whether the entire Standard Model was correct, had come to be regarded as one of the most important unanswered questions in particle physics. The existence of the Higgs field became the last unverified part of the Standard Model of particle physics, and for several decades was considered "the central problem in particle physics".
For many decades, scientists had no way to determine whether the Higgs field existed, because the technology needed for its detection did not exist at that time. If the Higgs field did exist, then it would be unlike any other known fundamental field, but it also was possible that these key ideas, or even the entire Standard Model, were somehow incorrect. The hypothesised Higgs theory made several key predictions. One crucial prediction was that a matching particle, called the "Higgs boson", should also exist. Proving the existence of the Higgs boson would prove whether the Higgs field existed, and therefore finally prove whether the Standard Model's explanation was correct. Therefore, there was an extensive search for the Higgs boson as a way to prove the Higgs field itself existed. Search and discovery Although the Higgs field would exist everywhere, proving its existence was far from easy. In principle, it can be proved to exist by detecting its excitations, which manifest as Higgs particles (the Higgs boson), but these are extremely difficult to produce and detect due to the energy required to produce them and their very rare production even if the energy is sufficient. It was, therefore, several decades before the first evidence of the Higgs boson could be found. Particle colliders, detectors, and computers capable of looking for Higgs bosons took more than 30 years to develop. The importance of this fundamental question led to a 40-year search, and the construction of one of the world's most expensive and complex experimental facilities to date, CERN's Large Hadron Collider (LHC), in an attempt to create Higgs bosons and other particles for observation and study. On 4 July 2012, the discovery of a new particle with a mass between was announced; physicists suspected that it was the Higgs boson. Since then, the particle has been shown to behave, interact, and decay in many of the ways predicted for Higgs particles by the Standard Model, as well as having even parity and zero spin, two fundamental attributes of a Higgs boson. This also means it is the first elementary scalar particle discovered in nature. By March 2013, the existence of the Higgs boson was confirmed, and therefore the concept of some type of Higgs field throughout space is strongly supported. The presence of the field, now confirmed by experimental investigation, explains why some fundamental particles have (a rest) mass, despite the symmetries controlling their interactions implying that they should be "massless". It also resolves several other long-standing puzzles, such as the reason for the extremely short distance travelled by the weak force bosons, and therefore the weak force's extremely short range. As of 2018, in-depth research shows the particle continuing to behave in line with predictions for the Standard Model's Higgs boson. More studies are needed to verify with higher precision that the discovered particle has all of the properties predicted or whether, as described by some theories, multiple Higgs bosons exist. The nature and properties of this field are now being investigated further, using more data collected at the LHC. Interpretation Various analogies have been used to describe the Higgs field and boson, including analogies with well-known symmetry-breaking effects such as the rainbow and prism, electric fields, and ripples on the surface of water.
Other analogies based on the resistance of macro objects moving through media (such as people moving through crowds, or some objects moving through syrup or molasses) are commonly used but misleading, since the Higgs field does not actually resist particles, and the effect of mass is not caused by resistance. Overview of Higgs boson and field properties In the Standard Model, the Higgs boson is a massive scalar boson whose mass must be found experimentally. Its mass has been determined to be by CMS (2022) and by ATLAS (2023). It is the only particle that remains massive even at very high energies. It has zero spin, even (positive) parity, no electric charge, and no colour charge, and it couples to (interacts with) mass. It is also very unstable, decaying into other particles almost immediately via several possible pathways. The Higgs field is a scalar field, with two neutral and two electrically charged components that form a complex doublet of the weak isospin SU(2) symmetry. Unlike any other known quantum field, it has a Sombrero potential. This shape means that below extremely high energies of about such as those seen during the first of the Big Bang, it takes less energy for the Higgs field in its ground state to have a nonzero vacuum expectation value than a zero value. Therefore, in today's universe the Higgs field has a nonzero value everywhere (including otherwise empty space). This nonzero value in turn breaks the weak isospin SU(2) symmetry of the electroweak interaction everywhere. (Technically the non-zero expectation value converts the Lagrangian's Yukawa coupling terms into mass terms.) When this happens, three components of the Higgs field are "absorbed" by the SU(2) and U(1) gauge bosons (the "Higgs mechanism") to become the longitudinal components of the now-massive W and Z bosons of the weak force. The remaining electrically neutral component either manifests as a Higgs boson, or may couple separately to other particles known as fermions (via Yukawa couplings), causing these to acquire mass as well. Significance Evidence of the Higgs field and its properties has been extremely significant for many reasons. The importance of the Higgs boson largely is that it is able to be examined using existing knowledge and experimental technology, as a way to confirm and study the entire Higgs field theory. Conversely, proof that the Higgs field and boson did not exist would have also been significant. Particle physics Validation of the Standard Model The Higgs boson validates the Standard Model through the mechanism of mass generation. As more precise measurements of its properties are made, more advanced extensions may be suggested or excluded. As experimental means to measure the field's behaviours and interactions are developed, this fundamental field may be better understood. If the Higgs field had not been discovered, the Standard Model would have needed to be modified or superseded. Related to this, a belief generally exists among physicists that there is likely to be "new" physics beyond the Standard Model, and the Standard Model will at some point be extended or superseded. The Higgs discovery, as well as the many measured collisions occurring at the LHC, provide physicists a sensitive tool to search their data for any evidence that the Standard Model seems to fail, and could provide considerable evidence guiding researchers into future theoretical developments.
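The "Sombrero potential" referred to in this section is conventionally written as follows (a standard textbook form, not quoted from this article; the symbols μ, λ, and v are the usual conventions):

\[
V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2, \qquad \mu^2 > 0,\ \lambda > 0,
\]

which is minimised not at $\phi = 0$ but where $\phi^\dagger\phi = \mu^2/(2\lambda) = v^2/2$, giving the nonzero vacuum expectation value $v = \mu/\sqrt{\lambda}$ that the text describes.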
Symmetry breaking of the electroweak interaction Below an extremely high temperature, electroweak symmetry breaking causes the electroweak interaction to manifest in part as the short-ranged weak force, which is carried by massive gauge bosons. In the history of the universe, electroweak symmetry breaking is believed to have happened at about after the Big Bang, when the universe was at a temperature . This symmetry breaking is required for atoms and other structures to form, as well as for nuclear reactions in stars, such as the Sun. The Higgs field is responsible for this symmetry breaking. Particle mass acquisition The Higgs field is pivotal in generating the masses of quarks and charged leptons (through Yukawa coupling) and the W and Z gauge bosons (through the Higgs mechanism). The Higgs field does not "create" mass out of nothing (which would violate the law of conservation of energy), nor is the Higgs field responsible for the mass of all particles. For example, approximately 99% of the mass of baryons (composite particles such as the proton and neutron) is due instead to quantum chromodynamic binding energy, which is the sum of the kinetic energies of quarks and the energies of the massless gluons mediating the strong interaction inside the baryons. In Higgs-based theories, the property of "mass" is a manifestation of potential energy transferred to fundamental particles when they interact ("couple") with the Higgs field, which had contained that mass in the form of energy. Scalar fields and extension of the Standard Model The Higgs field is the only scalar (spin-0) field to be detected; all the other fundamental fields in the Standard Model are spin-½ fermions or spin-1 bosons. According to Rolf-Dieter Heuer, director general of CERN when the Higgs boson was discovered, this existence proof of a scalar field is almost as important as the Higgs's role in determining the mass of other particles. It suggests that other hypothetical scalar fields suggested by other theories, from the inflaton to quintessence, could perhaps exist as well. Cosmology Inflaton There has been considerable scientific research on possible links between the Higgs field and the inflaton, a hypothetical field suggested as the explanation for the expansion of space during the first fraction of a second of the universe (known as the "inflationary epoch"). Some theories suggest that a fundamental scalar field might be responsible for this phenomenon; the Higgs field is such a field, and its existence has led to papers analysing whether it could also be the inflaton responsible for this exponential expansion of the universe during the Big Bang. Such theories are highly tentative and face significant problems related to unitarity, but may be viable if combined with additional features such as large non-minimal coupling, a Brans–Dicke scalar, or other "new" physics, and they have received treatments suggesting that Higgs inflation models are still of interest theoretically. Nature of the universe, and its possible fates In the Standard Model, there exists the possibility that the underlying state of our universe – known as the "vacuum" – is long-lived, but not completely stable. In this scenario, the universe as we know it could effectively be destroyed by collapsing into a more stable vacuum state. This was sometimes misreported as the Higgs boson "ending" the universe.
If the masses of the Higgs boson and top quark are known more precisely, and the Standard Model provides an accurate description of particle physics up to extreme energies of the Planck scale, then it is possible to calculate whether the vacuum is stable or merely long-lived. A Higgs mass of seems to be extremely close to the boundary for stability, but a definitive answer requires much more precise measurements of the pole mass of the top quark. New physics can change this picture. If measurements of the Higgs boson suggest that our universe lies within a false vacuum of this kind, then it would imply – more than likely in many billions of years – that the universe's forces, particles, and structures could cease to exist as we know them (and be replaced by different ones), if a true vacuum happened to nucleate. It also suggests that the Higgs self-coupling λ and its β function could be very close to zero at the Planck scale, with "intriguing" implications, including theories of gravity and Higgs-based inflation. A future electron–positron collider would be able to provide the precise measurements of the top quark needed for such calculations. Vacuum energy and the cosmological constant More speculatively, the Higgs field has also been proposed as the energy of the vacuum, which at the extreme energies of the first moments of the Big Bang caused the universe to be a kind of featureless symmetry of undifferentiated, extremely high energy. In this kind of speculation, the single unified field of a Grand Unified Theory is identified as (or modelled upon) the Higgs field, and it is through successive symmetry breakings of the Higgs field, or some similar field, at phase transitions that the presently known forces and fields of the universe arise. The relationship (if any) between the Higgs field and the presently observed vacuum energy density of the universe has also come under scientific study. As observed, the present vacuum energy density is extremely close to zero, but the energy densities predicted from the Higgs field, supersymmetry, and other current theories are typically many orders of magnitude larger. It is unclear how these should be reconciled. This cosmological constant problem remains a major unanswered problem in physics. History Theorisation Particle physicists study matter made from fundamental particles whose interactions are mediated by exchange particles – gauge bosons – acting as force carriers. At the beginning of the 1960s a number of these particles had been discovered or proposed, along with theories suggesting how they relate to each other, some of which had already been reformulated as field theories in which the objects of study are not particles and forces, but quantum fields and their symmetries. However, attempts to produce quantum field models for two of the four known fundamental forces – the electromagnetic force and the weak nuclear force – and then to unify these interactions, were still unsuccessful. One known problem was that gauge invariant approaches, including non-abelian models such as Yang–Mills theory (1954), which held great promise for unified theories, also seemed to predict known massive particles as massless. Goldstone's theorem, relating to continuous symmetries within some theories, also appeared to rule out many obvious solutions, since it appeared to show that zero-mass particles known as Goldstone bosons would also have to exist that simply were "not seen". According to Guralnik, physicists had "no understanding" how these problems could be overcome.
Particle physicist and mathematician Peter Woit summarised the state of research at the time. The Higgs mechanism is a process by which vector bosons can acquire rest mass without explicitly breaking gauge invariance, as a byproduct of spontaneous symmetry breaking. Initially, the mathematical theory behind spontaneous symmetry breaking was conceived and published within particle physics by Yoichiro Nambu in 1960 (and somewhat anticipated by Ernst Stueckelberg in 1938), and the concept that such a mechanism could offer a possible solution for the "mass problem" was originally suggested in 1962 by Philip Anderson, who had previously written papers on broken symmetry and its outcomes in superconductivity. Anderson concluded in his 1963 paper on the Yang–Mills theory that "considering the superconducting analog ... [t]hese two types of bosons seem capable of canceling each other out ... leaving finite mass bosons", and in March 1964, Abraham Klein and Benjamin Lee showed that Goldstone's theorem could be avoided this way in at least some non-relativistic cases, and speculated it might be possible in truly relativistic cases. These approaches were quickly developed into a full relativistic model, independently and almost simultaneously, by three groups of physicists: by François Englert and Robert Brout in August 1964; by Peter Higgs in October 1964; and by Gerald Guralnik, Carl Hagen, and Tom Kibble (GHK) in November 1964. Higgs also wrote a short, but important, response published in September 1964 to an objection by Gilbert, which showed that, calculating within the radiation gauge, Goldstone's theorem and Gilbert's objection would become inapplicable. Higgs later described Gilbert's objection as prompting his own paper. Properties of the model were further considered by Guralnik in 1965, by Higgs in 1966, by Kibble in 1967, and further by GHK in 1967. The original three 1964 papers demonstrated that when a gauge theory is combined with an additional charged scalar field that spontaneously breaks the symmetry, the gauge bosons may consistently acquire a finite mass. In 1967, Steven Weinberg and Abdus Salam independently showed how a Higgs mechanism could be used to break the electroweak symmetry of Sheldon Glashow's unified model for the weak and electromagnetic interactions (itself an extension of work by Schwinger), forming what became the Standard Model of particle physics. Weinberg was the first to observe that this would also provide mass terms for the fermions. At first, these seminal papers on spontaneous breaking of gauge symmetries were largely ignored, because it was widely believed that the (non-Abelian gauge) theories in question were a dead-end, and in particular that they could not be renormalised. In 1971–72, Martinus Veltman and Gerard 't Hooft proved renormalisation of Yang–Mills was possible in two papers covering massless, and then massive, fields. Their contribution, and the work of others on the renormalisation group – including "substantial" theoretical work by Russian physicists Ludvig Faddeev, Andrei Slavnov, Efim Fradkin, and Igor Tyutin – was eventually "enormously profound and influential", but even with all key elements of the eventual theory published there was still almost no wider interest.
For example, Coleman found in a study that "essentially no-one paid any attention" to Weinberg's paper prior to 1971, a point also discussed by David Politzer in his 2004 Nobel speech, even though the paper is now the most cited in particle physics; even in 1970, according to Politzer, Glashow's teaching of the weak interaction contained no mention of Weinberg's, Salam's, or Glashow's own work. In practice, Politzer states, almost everyone learned of the theory due to physicist Benjamin Lee, who combined the work of Veltman and 't Hooft with insights by others, and popularised the completed theory. In this way, from 1971, interest and acceptance "exploded" and the ideas were quickly absorbed in the mainstream. The resulting electroweak theory and Standard Model have accurately predicted (among other things) weak neutral currents, three bosons, the top and charm quarks, and with great precision, the mass and other properties of some of these. Many of those involved eventually won Nobel Prizes or other renowned awards. A 1974 paper and comprehensive review in Reviews of Modern Physics commented that "while no one doubted the [mathematical] correctness of these arguments, no one quite believed that nature was diabolically clever enough to take advantage of them", adding that the theory had so far produced accurate answers that accorded with experiment, but it was unknown whether the theory was fundamentally correct. By 1986 and again in the 1990s it became possible to write that understanding and proving the Higgs sector of the Standard Model was "the central problem today in particle physics". Summary and impact of the PRL papers The three papers written in 1964 were each recognised as milestone papers during Physical Review Letters' 50th anniversary celebration. Their six authors were also awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. (A controversy also arose the same year, because in the event of a Nobel Prize only up to three scientists could be recognised, with six being credited for the papers.) Two of the three PRL papers (by Higgs and by GHK) contained equations for the hypothetical field that eventually would become known as the Higgs field and its hypothetical quantum, the Higgs boson. Higgs' subsequent 1966 paper showed the decay mechanism of the boson; only a massive boson can decay, and the decays can prove the mechanism. In the paper by Higgs the boson is massive, and in a closing sentence Higgs writes that "an essential feature" of the theory "is the prediction of incomplete multiplets of scalar and vector bosons". (Frank Close comments that 1960s gauge theorists were focused on the problem of massless vector bosons, and the implied existence of a massive scalar boson was not seen as important; only Higgs directly addressed it.) In the paper by GHK the boson is massless and decoupled from the massive states. In reviews dated 2009 and 2011, Guralnik states that in the GHK model the boson is massless only in a lowest-order approximation, but it is not subject to any constraint and acquires mass at higher orders, and adds that the GHK paper was the only one to show that there are no massless Goldstone bosons in the model and to give a complete analysis of the general Higgs mechanism.
All three reached similar conclusions, despite their very different approaches: Higgs' paper essentially used classical techniques, Englert and Brout's involved calculating vacuum polarisation in perturbation theory around an assumed symmetry-breaking vacuum state, and GHK used operator formalism and conservation laws to explore in depth the ways in which Goldstone's theorem may be worked around. Some versions of the theory predicted more than one kind of Higgs field and boson, and alternative "Higgsless" models were considered until the discovery of the Higgs boson. Experimental search To produce Higgs bosons, two beams of particles are accelerated to very high energies and allowed to collide within a particle detector. Occasionally, although rarely, a Higgs boson will be created fleetingly as part of the collision byproducts. Because the Higgs boson decays very quickly, particle detectors cannot detect it directly. Instead the detectors register all the decay products (the decay signature) and from the data the decay process is reconstructed. If the observed decay products match a possible decay process (known as a decay channel) of a Higgs boson, this indicates that a Higgs boson may have been created. In practice, many processes may produce similar decay signatures. Fortunately, the Standard Model precisely predicts the likelihood of each of these, and each known process, occurring. So, if the detector detects more decay signatures consistently matching a Higgs boson than would otherwise be expected if Higgs bosons did not exist, then this would be strong evidence that the Higgs boson exists. Because Higgs boson production in a particle collision is likely to be very rare (1 in 10 billion at the LHC), and many other possible collision events can have similar decay signatures, the data of hundreds of trillions of collisions needs to be analysed and must "show the same picture" before a conclusion about the existence of the Higgs boson can be reached. To conclude that a new particle has been found, particle physicists require that the statistical analysis of two independent particle detectors each indicate that there is less than a one-in-a-million chance that the observed decay signatures are due to just background random Standard Model events – i.e., that the observed number of events is more than five standard deviations (sigma) different from that expected if there was no new particle. More collision data allows better confirmation of the physical properties of any new particle observed, and allows physicists to decide whether it is indeed a Higgs boson as described by the Standard Model or some other hypothetical new particle. To find the Higgs boson, a powerful particle accelerator was needed, because Higgs bosons might not be seen in lower-energy experiments. The collider needed to have a high luminosity in order to ensure enough collisions were seen for conclusions to be drawn. Finally, advanced computing facilities were needed to process the vast amount of data (25 petabytes per year as of 2012) produced by the collisions. For the announcement of 4 July 2012, a new collider known as the Large Hadron Collider was constructed at CERN with a planned eventual collision energy of 14 TeV – over seven times any previous collider – and over 300 trillion (3×10¹⁴) LHC proton–proton collisions were analysed by the LHC Computing Grid, the world's largest computing grid (as of 2012), comprising over 170 computing facilities in a worldwide network across 36 countries.
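The "five sigma" criterion described above corresponds to a one-sided tail probability of the standard normal distribution, which can be checked with a minimal Python/SciPy calculation (illustrative only, not part of the source text):

from scipy.stats import norm

# P(Z >= 5) for a standard normal variable: the chance that a purely
# random background fluctuation reaches at least five standard deviations.
p = norm.sf(5)
print(p)   # ~2.87e-07, i.e. roughly 1 chance in 3.5 million

This is indeed "less than one in three million", the figure quoted later for the July 2012 result.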
Search before 4 July 2012 The first extensive search for the Higgs boson was conducted at the Large Electron–Positron Collider (LEP) at CERN in the 1990s. At the end of its service in 2000, LEP had found no conclusive evidence for the Higgs. This implied that if the Higgs boson were to exist it would have to be heavier than . The search continued at Fermilab in the United States, where the Tevatron – the collider that discovered the top quark in 1995 – had been upgraded for this purpose. There was no guarantee that the Tevatron would be able to find the Higgs, but it was the only supercollider that was operational, since the Large Hadron Collider (LHC) was still under construction and the planned Superconducting Super Collider had been cancelled in 1993 and never completed. The Tevatron was only able to exclude further ranges for the Higgs mass, and was shut down on 30 September 2011 because it no longer could keep up with the LHC. The final analysis of the data excluded the possibility of a Higgs boson with a mass between and . In addition, there was a small (but not significant) excess of events possibly indicating a Higgs boson with a mass between and . The Large Hadron Collider at CERN in Switzerland was designed specifically to be able to either confirm or exclude the existence of the Higgs boson. Built in a 27 km tunnel under the ground near Geneva that formerly housed LEP, it was designed to collide two beams of protons, initially at energies of per beam (7 TeV total), or almost 3.6 times that of the Tevatron, and upgradeable to (14 TeV total) in future. Theory suggested if the Higgs boson existed, collisions at these energy levels should be able to reveal it. As one of the most complicated scientific instruments ever built, its operational readiness was delayed for 14 months by a magnet quench event nine days after its inaugural tests, caused by a faulty electrical connection that damaged over 50 superconducting magnets and contaminated the vacuum system. Data collection at the LHC finally commenced in March 2010. By December 2011 the two main particle detectors at the LHC, ATLAS and CMS, had narrowed down the mass range where the Higgs could exist to around (ATLAS) and (CMS). There had also already been a number of promising event excesses that had "evaporated" and proven to be nothing but random fluctuations. However, from around May 2011, both experiments had seen among their results the slow emergence of a small yet consistent excess of gamma and 4-lepton decay signatures and several other particle decays, all hinting at a new particle at a mass around . By around November 2011, the anomalous data at was becoming "too large to ignore" (although still far from conclusive), and the team leaders at both ATLAS and CMS each privately suspected they might have found the Higgs. On 28 November 2011, at an internal meeting of the two team leaders and the director general of CERN, the latest analyses were discussed outside their teams for the first time, suggesting both ATLAS and CMS might be converging on a possible shared result at , and initial preparations commenced in case of a successful finding. While this information was not known publicly at the time, the narrowing of the possible Higgs range to around and the repeated observation of small but consistent event excesses across multiple channels at both ATLAS and CMS in the region (described as "tantalising hints" of around 2–3 sigma) were public knowledge with "a lot of interest".
It was therefore widely anticipated around the end of 2011 that the LHC would provide sufficient data to either exclude or confirm the finding of a Higgs boson by the end of 2012, when their 2012 collision data (with slightly higher 8 TeV collision energy) had been examined. Discovery of candidate boson at CERN On 22 June 2012 CERN announced an upcoming seminar covering tentative findings for 2012, and shortly afterwards (from around 1 July 2012 according to an analysis of the spreading rumour in social media) rumours began to spread in the media that this would include a major announcement, but it was unclear whether this would be a stronger signal or a formal discovery. Speculation escalated to a "fevered" pitch when reports emerged that Peter Higgs, who proposed the particle, was to be attending the seminar, and that "five leading physicists" had been invited – generally believed to signify the five living 1964 authors – with Higgs, Englert, Guralnik, and Hagen attending and Kibble confirming his invitation (Brout having died in 2011). On 4 July 2012 both of the CERN experiments announced they had independently made the same discovery: CMS of a previously unknown boson with mass and ATLAS of a boson with mass . Using the combined analysis of two interaction types (known as 'channels'), both experiments independently reached a local significance of 5 sigma – implying that the probability of getting at least as strong a result by chance alone is less than one in three million. When additional channels were taken into account, the CMS significance was reduced to 4.9 sigma. The two teams had been working 'blinded' from each other from around late 2011 or early 2012, meaning they did not discuss their results with each other, providing additional certainty that any common finding was genuine validation of a particle. This level of evidence, confirmed independently by two separate teams and experiments, meets the formal level of proof required to announce a confirmed discovery. On 31 July 2012, the ATLAS collaboration presented additional data analysis on the "observation of a new particle", including data from a third channel, which improved the significance to 5.9 sigma (1 in 588 million chance of obtaining at least as strong evidence by random background effects alone) and mass , and CMS improved the significance to 5 sigma and mass . New particle tested as a possible Higgs boson Following the 2012 discovery, it was still unconfirmed whether the particle was a Higgs boson. On one hand, observations remained consistent with the observed particle being the Standard Model Higgs boson, and the particle decayed into at least some of the predicted channels. Moreover, the production rates and branching ratios for the observed channels broadly matched the predictions by the Standard Model within the experimental uncertainties. However, the experimental uncertainties still left room for alternative explanations, meaning an announcement of the discovery of a Higgs boson would have been premature. To allow more opportunity for data collection, the LHC's proposed 2012 shutdown and 2013–14 upgrade were postponed by seven weeks into 2013. In November 2012, at a conference in Kyoto, researchers said evidence gathered since July was falling into line with the basic Standard Model more than its alternatives, with a range of results for several interactions matching that theory's predictions.
Physicist Matt Strassler highlighted "considerable" evidence that the new particle is not a pseudoscalar negative parity particle (consistent with this required finding for a Higgs boson), "evaporation" or lack of increased significance for previous hints of non-Standard Model findings, expected Standard Model interactions with W and Z bosons, absence of "significant new implications" for or against supersymmetry, and in general no significant deviations to date from the results expected of a Standard Model Higgs boson. However, some kinds of extensions to the Standard Model would also show very similar results; so commentators noted that, based on other particles that are still being understood long after their discovery, it may take years to be sure, and decades to fully understand the particle that has been found. These findings meant that as of January 2013, scientists were very sure they had found an unknown particle of mass ~ , and had not been misled by experimental error or a chance result. They were also sure, from initial observations, that the new particle was some kind of boson. The behaviours and properties of the particle, so far as examined since July 2012, also seemed quite close to the behaviours expected of a Higgs boson. Even so, it could still have been a Higgs boson or some other unknown boson, since future tests could show behaviours that do not match a Higgs boson, so as of December 2012 CERN still only stated that the new particle was "consistent with" the Higgs boson, and scientists did not yet positively say it was the Higgs boson. Despite this, in late 2012, widespread media reports announced (incorrectly) that a Higgs boson had been confirmed during the year. In January 2013, CERN director-general Rolf-Dieter Heuer stated that, based on data analysis to date, an answer could be possible "towards" mid-2013, and the deputy chair of physics at Brookhaven National Laboratory stated in February 2013 that a "definitive" answer might require "another few years" after the collider's 2015 restart. In early March 2013, CERN Research Director Sergio Bertolucci stated that confirming spin-0 was the major remaining requirement to determine whether the particle is at least some kind of Higgs boson. Confirmation of existence and current status On 14 March 2013 CERN confirmed the following: CMS and ATLAS have compared a number of options for the spin-parity of this particle, and these all prefer no spin and even parity [two fundamental criteria of a Higgs boson consistent with the Standard Model]. This, coupled with the measured interactions of the new particle with other particles, strongly indicates that it is a Higgs boson. This also makes the particle the first elementary scalar particle to be discovered in nature. Findings since 2013 In July 2017, CERN confirmed that all measurements still agree with the predictions of the Standard Model, and called the discovered particle simply "the Higgs boson". As of 2019, the Large Hadron Collider has continued to produce findings that confirm the 2013 understanding of the Higgs field and particle. The LHC's experimental work since restarting in 2015 has included probing the Higgs field and boson to a greater level of detail, and confirming whether less common predictions were correct.
In particular, exploration since 2015 has provided strong evidence of the predicted direct decay into fermions such as pairs of bottom quarks (3.6 σ) – described as an "important milestone" in understanding its short lifetime and other rare decays – and also to confirm decay into pairs of tau leptons (5.9 σ). This was described by CERN as being "of paramount importance to establishing the coupling of the Higgs boson to leptons and represents an important step towards measuring its couplings to third generation fermions, the very heavy copies of the electrons and quarks, whose role in nature is a profound mystery". Published results as of 19 March 2018 at 13 TeV for ATLAS and CMS had their measurements of the Higgs mass at and respectively. In July 2018, the ATLAS and CMS experiments reported observing the Higgs boson decay into a pair of bottom quarks, which makes up approximately 60% of all of its decays. Theoretical issues Theoretical need for the Higgs Gauge invariance is an important property of modern particle theories such as the Standard Model, partly due to its success in other areas of fundamental physics such as electromagnetism and the strong interaction (quantum chromodynamics). However, before Sheldon Glashow extended the electroweak unification models in 1961, there were great difficulties in developing gauge theories for the weak nuclear force or a possible unified electroweak interaction. Fermions with a mass term would violate gauge symmetry and therefore cannot be gauge invariant. (This can be seen by examining the Dirac Lagrangian for a fermion in terms of left and right handed components; we find none of the spin-half particles could ever flip helicity as required for mass, so they must be massless.) W and Z bosons are observed to have mass, but a boson mass term contains terms which clearly depend on the choice of gauge, and therefore these masses too cannot be gauge invariant. Therefore, it seems that none of the standard model fermions or bosons could "begin" with mass as an inbuilt property except by abandoning gauge invariance. If gauge invariance were to be retained, then these particles had to be acquiring their mass by some other mechanism or interaction. Additionally, solutions based on spontaneous symmetry breaking appeared to fail, seemingly an inevitable result of Goldstone's theorem. Because there is no potential energy cost to moving around the complex plane's "circular valley" responsible for spontaneous symmetry breaking, the resulting quantum excitation is pure kinetic energy, and therefore a massless boson ("Goldstone boson"), which in turn implies a new long range force. But no new long range forces or massless particles were detected either. So whatever was giving these particles their mass had to not "break" gauge invariance as the basis for other parts of the theories where it worked well, and had to not require or predict unexpected massless particles or long-range forces which did not actually seem to exist in nature. A solution to all of these overlapping problems came from the discovery of a previously unnoticed borderline case hidden in the mathematics of Goldstone's theorem, that under certain conditions it might theoretically be possible for a symmetry to be broken without disrupting gauge invariance and without any new massless particles or forces, and having "sensible" (renormalisable) results mathematically. This became known as the Higgs mechanism.
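The chirality argument sketched above is usually written compactly as follows (a standard textbook rendering, not quoted from this article). A Dirac mass term couples the left- and right-handed components of a fermion field $\psi$,

\[
m\,\bar{\psi}\psi = m\left(\bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L\right),
\]

so it fails to be gauge invariant whenever $\psi_L$ and $\psi_R$ carry different weak charges, as they do in the Standard Model. By contrast, a Yukawa coupling of the schematic form $y\,\bar{\psi}_L \phi\, \psi_R + \mathrm{h.c.}$ can be gauge invariant, and once the Higgs field acquires its vacuum value $v$ it yields an effective mass $m = y v/\sqrt{2}$.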
The Standard Model hypothesises a field which is responsible for this effect, called the Higgs field (symbol: φ), which has the unusual property of a non-zero amplitude in its ground state; i.e., a non-zero vacuum expectation value. It can have this effect because of its unusual "Mexican hat" shaped potential, whose lowest "point" is not at its "centre". In simple terms, unlike all other known fields, the Higgs field requires less energy to have a non-zero value than a zero value, so it ends up having a non-zero value everywhere. Below a certain extremely high energy level, the existence of this non-zero vacuum expectation spontaneously breaks electroweak gauge symmetry, which in turn gives rise to the Higgs mechanism and triggers the acquisition of mass by those particles interacting with the field. This effect occurs because scalar field components of the Higgs field are "absorbed" by the massive bosons as degrees of freedom, and couple to the fermions via Yukawa coupling, thereby producing the expected mass terms. When symmetry breaks under these conditions, the Goldstone bosons that arise interact with the Higgs field (and with other particles capable of interacting with the Higgs field) instead of becoming new massless particles. The intractable problems of both underlying theories "neutralise" each other, and the residual outcome is that elementary particles acquire a consistent mass based on how strongly they interact with the Higgs field. It is the simplest known process capable of giving mass to the gauge bosons while remaining compatible with gauge theories. Its quantum would be a scalar boson, known as the Higgs boson. Simple explanation of the theory, from its origins in superconductivity The proposed Higgs mechanism arose as a result of theories proposed to explain observations in superconductivity. A superconductor does not allow penetration by external magnetic fields (the Meissner effect). This strange observation implies that somehow, the electromagnetic field becomes short ranged during this phenomenon. Successful theories arose to explain this during the 1950s, first for bosons (Ginzburg–Landau theory, 1950), and then for fermions (BCS theory, 1957). In these theories, superconductivity is interpreted as arising from a charged condensate field. Initially, the condensate value does not have any preferred direction, implying it is scalar, but its phase is capable of defining a gauge in gauge-based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or, described another way, it contains at least two components, and a symmetry capable of rotating each into the other(s)). In naïve gauge theory, a gauge transformation of a condensate usually rotates the phase. But in these circumstances, it instead fixes a preferred choice of phase. However, it turns out that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short range. Once attention was drawn to this theory within particle physics, the parallels were clear.
A change of the usually long-range electromagnetic field to become short ranged, within a gauge invariant theory, was exactly the effect sought for the weak force bosons (because a long-range force has massless gauge bosons, and a short-ranged force implies massive gauge bosons, suggesting that a result of this interaction is that the field's gauge bosons acquired mass, or a similar and equivalent effect). The features of a field required to do this were also quite well defined – it would have to be a charged scalar field, with at least two components, and complex in order to support a symmetry able to rotate these into each other. Alternative models The Minimal Standard Model as described above is the simplest known model for the Higgs mechanism with just one Higgs field. However, an extended Higgs sector with additional Higgs particle doublets or triplets is also possible, and many extensions of the Standard Model have this feature. The non-minimal Higgs sectors favoured by theory are the two-Higgs-doublet models (2HDM), which predict the existence of a quintet of scalar particles: two CP-even neutral Higgs bosons h0 and H0, a CP-odd neutral Higgs boson A0, and two charged Higgs particles H±. Supersymmetry ("SUSY") also predicts relations between the Higgs-boson masses and the masses of the gauge bosons, and could accommodate a neutral Higgs boson. The key method to distinguish between these different models involves study of the particles' interactions ("couplings") and exact decay processes ("branching ratios"), which can be measured and tested experimentally in particle collisions. In the Type-I 2HDM model, one Higgs doublet couples to up and down quarks, while the second doublet does not couple to quarks. This model has two interesting limits, in which the lightest Higgs couples to just fermions ("gauge-phobic") or just gauge bosons ("fermiophobic"), but not both. In the Type-II 2HDM model, one Higgs doublet only couples to up-type quarks, while the other only couples to down-type quarks. The heavily researched Minimal Supersymmetric Standard Model (MSSM) includes a Type-II 2HDM Higgs sector, so it could be disproven by evidence of a Type-I 2HDM Higgs. In other models the Higgs scalar is a composite particle. For example, in technicolour the role of the Higgs field is played by strongly bound pairs of fermions called techniquarks. Other models feature pairs of top quarks (see top quark condensate). In yet other models, there is no Higgs field at all and the electroweak symmetry is broken using extra dimensions. Further theoretical issues and hierarchy problem The Standard Model leaves the mass of the Higgs boson as a parameter to be measured, rather than a value to be calculated. This is seen as theoretically unsatisfactory, particularly as quantum corrections (related to interactions with virtual particles) should apparently cause the Higgs particle to have a mass immensely higher than that observed, while at the same time the Standard Model requires a mass of the order of to ensure unitarity (in this case, to unitarise longitudinal vector boson scattering). Reconciling these points appears to require explaining why there is an almost-perfect cancellation resulting in the visible mass of ~ , and it is not clear how to do this.
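To make the size of this problem concrete, here is a standard back-of-the-envelope estimate (a sketch using the usual one-loop top-quark contribution; the formula and numbers are textbook assumptions, not taken from the original article):

```latex
% One-loop top-quark correction to the Higgs mass-squared,
% with \Lambda the cutoff scale up to which the SM is assumed valid:
\Delta m_H^2 \;\approx\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2
% For \Lambda \sim M_{\mathrm{Planck}} \approx 10^{19}\,\mathrm{GeV}, this
% correction is dozens of orders of magnitude larger than the observed
% m_H^2 \approx (125\,\mathrm{GeV})^2, so the bare parameter must cancel it
% to extraordinary precision -- the fine-tuning described above.
```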
Because the weak force is about 10³² times stronger than gravity, and (linked to this) the Higgs boson's mass is so much less than the Planck mass or the grand unification energy, it appears that either there is some underlying connection or reason for these observations which is unknown and not described by the Standard Model, or some unexplained and extremely precise fine-tuning of parameters; however, at present neither of these explanations is proven. This is known as a hierarchy problem. More broadly, the hierarchy problem amounts to the worry that a future theory of fundamental particles and interactions should not have excessive fine-tunings or unduly delicate cancellations, and should allow masses of particles such as the Higgs boson to be calculable. The problem is in some ways unique to spin-0 particles (such as the Higgs boson), which can give rise to issues related to quantum corrections that do not affect particles with spin. A number of solutions have been proposed, including supersymmetry, conformal solutions and solutions via extra dimensions such as braneworld models. There are also issues of quantum triviality, which suggests that it may not be possible to create a consistent quantum field theory involving elementary scalar particles. Triviality constraints can be used to restrict or predict parameters such as the Higgs boson mass. This can also lead to a predictable Higgs mass in asymptotic safety scenarios. Properties Properties of the Higgs field In the Standard Model, the Higgs field is a scalar tachyonic field – scalar meaning it does not transform under Lorentz transformations, and tachyonic meaning the field (but not the particle) has imaginary mass, and in certain configurations must undergo symmetry breaking. It consists of four components: two neutral ones and two charged component fields. Both of the charged components and one of the neutral fields are Goldstone bosons, which act as the longitudinal third-polarisation components of the massive W+, W−, and Z bosons. The quantum of the remaining neutral component corresponds to (and is theoretically realised as) the massive Higgs boson. This component can interact with fermions via Yukawa coupling to give them mass as well. Mathematically, the Higgs field has imaginary mass and is therefore a tachyonic field. While tachyons (particles that move faster than light) are a purely hypothetical concept, fields with imaginary mass have come to play an important role in modern physics. Under no circumstances do any excitations ever propagate faster than light in such theories – the presence or absence of a tachyonic mass has no effect whatsoever on the maximum velocity of signals (there is no violation of causality). Instead of faster-than-light particles, the imaginary mass creates an instability: any configuration in which one or more field excitations are tachyonic must spontaneously decay, and the resulting configuration contains no physical tachyons. This process is known as tachyon condensation, and is now believed to be the explanation for how the Higgs mechanism itself arises in nature, and therefore the reason behind electroweak symmetry breaking. Although the notion of imaginary mass might seem troubling, it is only the field, and not the mass itself, that is quantised. Therefore, the field operators at spacelike separated points still commute (or anticommute), and information and particles still do not propagate faster than light.
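The instability can be read off directly from the shape of the potential. The following is a sketch in one standard sign convention (chosen here for illustration, not taken from the original article):

```latex
% "Mexican hat" potential with a tachyonic (negative) mass-squared at the origin:
V(\phi) \;=\; -\mu^2\,\phi^\dagger\phi \;+\; \lambda\,(\phi^\dagger\phi)^2,
\qquad \mu^2,\ \lambda > 0 .
% Around \phi = 0 the quadratic term has the "wrong" sign (imaginary mass),
% so small fluctuations grow and the field rolls away from the origin.
% The true minima lie on the circle \phi^\dagger\phi = v^2/2 with
v \;=\; \mu/\sqrt{\lambda},
% and expanding about a minimum gives a quantum with an ordinary, real mass:
m_H^2 \;=\; 2\lambda v^2 \;=\; 2\mu^2 \;>\; 0 .
```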
Tachyon condensation drives a physical system that has reached a local limit (and might naively be expected to produce physical tachyons) to an alternate stable state where no physical tachyons exist. Once a tachyonic field such as the Higgs field reaches the minimum of the potential, its quanta are not tachyons any more but rather are ordinary particles such as the Higgs boson. Properties of the Higgs boson Since the Higgs field is scalar, the Higgs boson has no spin. The Higgs boson is also its own antiparticle, is CP-even, and has zero electric and colour charge. The Standard Model does not predict the mass of the Higgs boson. If that mass is between (consistent with empirical observations of ), then the Standard Model can be valid at energy scales all the way up to the Planck scale (). It should be the only particle in the Standard Model that remains massive even at high energies. Many theorists expect new physics beyond the Standard Model to emerge at the TeV scale, based on unsatisfactory properties of the Standard Model. The highest possible mass scale allowed for the Higgs boson (or some other electroweak symmetry breaking mechanism) is 1.4 TeV; beyond this point, the Standard Model becomes inconsistent without such a mechanism, because unitarity is violated in certain scattering processes. It is also possible, although experimentally difficult, to estimate the mass of the Higgs boson indirectly: in the Standard Model, the Higgs boson has a number of indirect effects; most notably, Higgs loops result in tiny corrections to the masses of the W and Z bosons. Precision measurements of electroweak parameters, such as the Fermi constant and the masses of the W and Z bosons, can be used to calculate constraints on the mass of the Higgs. As of July 2011, the precision electroweak measurements indicated that the mass of the Higgs boson was likely to be less than about at 95% confidence level. These indirect constraints rely on the assumption that the Standard Model is correct. It may still be possible to discover a Higgs boson above these masses if it is accompanied by other particles beyond those accommodated by the Standard Model. The LHC cannot directly measure the Higgs boson's lifetime, due to its extreme brevity. It is predicted as , based on the predicted decay width of . However, it can be measured indirectly, by comparing masses measured from quantum phenomena occurring in the on-shell production pathways and in the much rarer off-shell production pathways, derived from Dalitz decay via a virtual photon. Using this technique, the lifetime of the Higgs boson was tentatively measured in 2021 as , at 3.2 sigma (1 in 1000) significance. Production If Higgs particle theories are valid, then a Higgs particle can be produced much like other particles that are studied, in a particle collider. This involves accelerating a large number of particles to extremely high energies and extremely close to the speed of light, then allowing them to smash together. Protons and lead ions (the bare nuclei of lead atoms) are used at the LHC. In the extreme energies of these collisions, the desired esoteric particles will occasionally be produced and this can be detected and studied; any absence or difference from theoretical expectations can also be used to improve the theory. The relevant particle theory (in this case the Standard Model) will determine the necessary kinds of collisions and detectors.
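As an aside to the lifetime discussion above: the predicted lifetime follows directly from the predicted total decay width via the standard relation τ = ħ/Γ. A minimal sketch of the arithmetic, assuming the commonly quoted Standard Model width of roughly 4 MeV (an assumption here, not a measured input):

```python
# Mean lifetime of the Higgs boson from its predicted total decay width,
# via the uncertainty-principle relation tau = hbar / Gamma.
HBAR_GEV_S = 6.582e-25    # reduced Planck constant in GeV*s
GAMMA_GEV = 4.1e-3        # assumed SM total width, ~4.1 MeV expressed in GeV

tau = HBAR_GEV_S / GAMMA_GEV
print(f"tau ~ {tau:.2e} s")   # ~1.6e-22 s, far too brief for direct measurement
```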
The Standard Model predicts that Higgs bosons could be formed in a number of ways, although the probability of producing a Higgs boson in any collision is always expected to be very small – for example, only one Higgs boson per 10 billion collisions in the Large Hadron Collider. The most common expected processes for Higgs boson production are: Gluon fusion If the collided particles are hadrons such as the proton or antiproton – as is the case in the LHC and Tevatron – then it is most likely that two of the gluons binding the hadron together collide. The easiest way to produce a Higgs particle is if the two gluons combine to form a loop of virtual quarks. Since the coupling of particles to the Higgs boson is proportional to their mass, this process is more likely for heavy particles. In practice it is enough to consider the contributions of virtual top and bottom quarks (the heaviest quarks). This process is the dominant contribution at the LHC and Tevatron, being about ten times more likely than any of the other processes. Higgs strahlung If an elementary fermion collides with an antifermion – e.g., a quark with an antiquark or an electron with a positron – the two can merge to form a virtual W or Z boson which, if it carries sufficient energy, can then emit a Higgs boson. This process was the dominant production mode at LEP, where an electron and a positron collided to form a virtual Z boson, and it was the second-largest contribution for Higgs production at the Tevatron. At the LHC this process is only the third largest, because the LHC collides protons with protons, making a quark–antiquark collision less likely than at the Tevatron. Higgs strahlung is also known as associated production. Weak boson fusion Another possibility when two (anti-)fermions collide is that the two exchange a virtual W or Z boson, which emits a Higgs boson. The colliding fermions do not need to be the same type. So, for example, an up quark may exchange a Z boson with an anti-down quark. This process is the second most important for the production of the Higgs particle at the LHC and LEP. Top fusion The final process that is commonly considered is by far the least likely (by two orders of magnitude). This process involves two colliding gluons, which each decay into a heavy quark–antiquark pair. A quark and antiquark from each pair can then combine to form a Higgs particle. Decay Quantum mechanics predicts that if it is possible for a particle to decay into a set of lighter particles, then it will eventually do so. This is also true for the Higgs boson. The likelihood with which this happens depends on a variety of factors, including the difference in mass and the strength of the interactions. Most of these factors are fixed by the Standard Model, except for the mass of the Higgs boson itself. For a Higgs boson with a mass of , the SM predicts a mean lifetime of about . Since it interacts with all the massive elementary particles of the SM, the Higgs boson has many different processes through which it can decay. Each of these possible processes has its own probability, expressed as the branching ratio: the fraction of the total number of decays that follows that process. The SM predicts these branching ratios as a function of the Higgs mass. One way that the Higgs can decay is by splitting into a fermion–antifermion pair. As a general rule, the Higgs is more likely to decay into heavy fermions than light fermions, because the mass of a fermion is proportional to the strength of its interaction with the Higgs.
By this logic the most common decay should be into a top–antitop quark pair. However, such a decay would only be possible if the Higgs were heavier than ~ , twice the mass of the top quark. For a Higgs mass of , the SM predicts that the most common decay is into a bottom–antibottom quark pair, which happens 57.7% of the time. The second most common fermion decay at that mass is into a tau–antitau pair, which happens only about 6.3% of the time. Another possibility is for the Higgs to split into a pair of massive gauge bosons. The most likely possibility is for the Higgs to decay into a pair of W bosons, which happens about 21.5% of the time for a Higgs boson with a mass of . The W bosons can subsequently decay either into a quark and an antiquark or into a charged lepton and a neutrino. The decays of W bosons into quarks are difficult to distinguish from the background, and the decays into leptons cannot be fully reconstructed (because neutrinos are impossible to detect in particle collision experiments). A cleaner signal is given by decay into a pair of Z bosons (which happens about 2.6% of the time for a Higgs with a mass of ), if each of the bosons subsequently decays into a pair of easy-to-detect charged leptons (electrons or muons). Decay into massless gauge bosons (i.e., gluons or photons) is also possible, but requires an intermediate loop of virtual heavy quarks (top or bottom) or massive gauge bosons. The most common such process is the decay into a pair of gluons through a loop of virtual heavy quarks. This process, which is the reverse of the gluon fusion process mentioned above, happens approximately 8.6% of the time for a Higgs boson with a mass of . Much rarer is the decay into a pair of photons mediated by a loop of W bosons or heavy quarks, which happens only about twice for every thousand decays. However, this process is very relevant for experimental searches for the Higgs boson, because the energy and momentum of the photons can be measured very precisely, giving an accurate reconstruction of the mass of the decaying particle. In 2021, the extremely rare Dalitz decay into two leptons (electrons or muons) and a photon (ℓℓγ) was tentatively observed, via virtual photon decay. This can happen in three ways: Higgs to virtual photon to ℓℓγ, in which the virtual photon (γ*) has very small but nonzero mass; Higgs to Z boson to ℓℓγ; or Higgs to two leptons, one of which emits a final-state photon, leading to ℓℓγ. ATLAS searched for evidence of the first of these at low di-lepton mass , where this process should dominate. The observation is at 3.2 sigma (1 in 1000) significance. This decay path is important because it facilitates measuring the on- and off-shell mass of the Higgs boson (allowing indirect measurement of decay time), and the decay into two charged particles allows exploration of charge-conjugation and parity (CP) violation. Public discussion Naming Names used by physicists The name most strongly associated with the particle and field is the Higgs boson and Higgs field. For some time the particle was known by a combination of its PRL author names (including at times Anderson), for example the Brout–Englert–Higgs particle, the Anderson–Higgs particle, or the Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism, and these are still used at times. Fuelled in part by the issue of recognition and a potential shared Nobel Prize, the most appropriate name was still occasionally a topic of debate until 2013.
Higgs himself preferred to call the particle either by an acronym of all those involved, or "the scalar boson", or "the so-called Higgs particle". A considerable amount has been written on how Higgs' name came to be exclusively used. Two main explanations are offered. The first is that Higgs undertook a step which was either unique, clearer or more explicit in his paper in formally predicting and examining the particle. Of the PRL papers' authors, only the paper by Higgs explicitly offered as a prediction that a massive particle would exist and calculated some of its properties; he was therefore "the first to postulate the existence of a massive particle" according to Nature. Physicist and author Frank Close and physicist-blogger Peter Woit both comment that the paper by GHK was completed only after the papers by Higgs and by Brout and Englert had been submitted to Physical Review Letters, and that Higgs alone had drawn attention to a predicted massive scalar boson, while all others had focused on the massive vector bosons. In this way, Higgs' contribution also provided experimentalists with a crucial "concrete target" needed to test the theory. However, in Higgs' view, Brout and Englert did not explicitly mention the boson since its existence is plainly obvious in their work, while according to Guralnik the GHK paper was a complete analysis of the entire symmetry breaking mechanism whose mathematical rigour is absent from the other two papers, and a massive particle may exist in some solutions. Higgs' paper also provided an "especially sharp" statement of the challenge and its solution, according to science historian David Kaiser. The alternative explanation is that the name was popularised in the 1970s due to its use as a convenient shorthand or because of a mistake in citation. Many accounts, including Higgs' own, credit the "Higgs" name to physicist Benjamin Lee. Lee was a significant populariser of the theory in its early days, and habitually attached the name "Higgs" as a "convenient shorthand" for its components from 1972, and in at least one instance from as early as 1966. Although Lee clarified in his footnotes that "'Higgs' is an abbreviation for Higgs, Kibble, Guralnik, Hagen, Brout, Englert", his use of the term (and perhaps also Steven Weinberg's mistaken citation of Higgs' paper as the first in his seminal 1967 paper) meant that by around 1975–1976 others had also begun to use the name "Higgs" exclusively as a shorthand. In 2012, physicist Frank Wilczek, who was credited with naming the elementary particle the axion (over an alternative proposal, "Higglet", by Weinberg), endorsed the "Higgs boson" name, stating "History is complicated, and wherever you draw the line, there will be somebody just below it." Nickname The Higgs boson is often referred to as the "God particle" in popular media outside the scientific community. The nickname comes from the title of the 1993 book on the Higgs boson and particle physics, The God Particle: If the Universe Is the Answer, What Is the Question? by Physics Nobel Prize winner and Fermilab director Leon Lederman. Lederman wrote it in the context of failing US government support for the Superconducting Super Collider, a partially constructed titanic competitor to the Large Hadron Collider with planned collision energies of , which had been championed by Lederman since its 1983 inception and was shut down in 1993. The book sought in part to promote awareness of the significance and need for such a project in the face of its possible loss of funding.
Lederman, a leading researcher in the field, writes that he wanted to title his book The Goddamn Particle: If the Universe is the Answer, What is the Question? Lederman's editor decided that the title was too controversial and convinced him to change it to The God Particle: If the Universe is the Answer, What is the Question? While media use of this term may have contributed to wider awareness and interest, many scientists feel the name is inappropriate, since it is sensational hyperbole and misleads readers; the particle also has nothing to do with any God, leaves open numerous questions in fundamental physics, and does not explain the ultimate origin of the universe. Higgs, an atheist, was reported to be displeased, and stated in a 2008 interview that he found it "embarrassing" because it was "the kind of misuse[...] which I think might offend some people". The nickname has been satirised in mainstream media as well. Science writer Ian Sample stated in his 2010 book on the search that the nickname is "universally hate[d]" by physicists and perhaps the "worst derided" in the history of physics, but that (according to Lederman) the publisher rejected all titles mentioning "Higgs" as unimaginative and too unknown. Lederman begins with a review of the long human search for knowledge, and explains that his tongue-in-cheek title draws an analogy comparing the impact of the Higgs field on the fundamental symmetries at the Big Bang – and the apparent chaos of structures, particles, forces and interactions that resulted and shaped our present universe – with the biblical story of Babel, in which the primordial single language of early Genesis was fragmented into many disparate languages and cultures. Lederman asks whether the Higgs boson was added just to perplex and confound those seeking knowledge of the universe, and whether physicists will be confounded by it as recounted in that story, or ultimately surmount the challenge and understand "how beautiful is the universe [God has] made". Other proposals A renaming competition by British newspaper The Guardian in 2009 resulted in their science correspondent choosing the name "the champagne bottle boson" as the best submission: "The bottom of a champagne bottle is in the shape of the Higgs potential and is often used as an illustration in physics lectures. So it's not an embarrassingly grandiose name, it is memorable, and [it] has some physics connection too." The name Higgson was suggested as well, in an opinion piece in the Institute of Physics' online publication physicsworld.com. Educational explanations and analogies There has been considerable public discussion of analogies and explanations for the Higgs particle and how the field creates mass, including coverage of explanatory attempts in their own right, a competition in 1993 for the best popular explanation by then-UK Minister for Science Sir William Waldegrave, and articles in newspapers worldwide. An educational collaboration involving an LHC physicist and a High School Teachers at CERN educator suggests that dispersion of light (responsible for the rainbow and the dispersive prism) is a useful analogy for the Higgs field's symmetry breaking and mass-causing effect.
Matt Strassler has used electric fields as an analogy, and a similar explanation was offered by The Guardian. The Higgs field's effect on particles was famously described by physicist David Miller as akin to a room full of political party workers spread evenly throughout: the crowd gravitates to and slows down famous people but does not slow down others. He also drew attention to well-known effects in solid state physics where an electron's effective mass can be much greater than usual in the presence of a crystal lattice. Analogies based on drag effects, including analogies of "syrup" or "molasses", are also well known, but can be somewhat misleading, since they may be understood (incorrectly) as saying that the Higgs field simply resists some particles' motion but not others' – a simple resistive effect could also conflict with Newton's third law. The Higgs boson is commonly misunderstood as being responsible for mass, rather than the Higgs field, and as relating to most mass in the universe. Recognition and awards There was considerable discussion prior to late 2013 of how to allocate the credit if the Higgs boson were proven, made more pointed because a Nobel Prize was expected and because of the very wide range of people entitled to consideration. These included a range of theoreticians who made the Higgs mechanism theory possible, the theoreticians of the 1964 PRL papers (including Higgs himself), the theoreticians who derived from these a working electroweak theory and the Standard Model itself, and also the experimentalists at CERN and other institutions who made possible the proof of the Higgs field and boson in reality. The Nobel Prize has a limit of three persons to share an award, and some possible winners were already prize holders for other work, or were deceased (the prize is only awarded to persons in their lifetime). Existing prizes for works relating to the Higgs field, boson, or mechanism include: Nobel Prize in Physics (1979) – Glashow, Salam, and Weinberg, for contributions to the theory of the unified weak and electromagnetic interaction between elementary particles Nobel Prize in Physics (1999) – 't Hooft and Veltman, for elucidating the quantum structure of electroweak interactions in physics J. J. Sakurai Prize for Theoretical Particle Physics (2010) – Hagen, Englert, Guralnik, Higgs, Brout, and Kibble, for elucidation of the properties of spontaneous symmetry breaking in four-dimensional relativistic gauge theory and of the mechanism for the consistent generation of vector boson masses (for the 1964 papers described above) Wolf Prize (2004) – Englert, Brout, and Higgs Special Breakthrough Prize in Fundamental Physics (2013) – Fabiola Gianotti and Peter Jenni, spokespersons of the ATLAS collaboration, and Michel Della Negra, Tejinder Singh Virdee, Guido Tonelli, and Joseph Incandela, spokespersons, past and present, of the CMS collaboration, "For [their] leadership role in the scientific endeavour that led to the discovery of the new Higgs-like particle by the ATLAS and CMS collaborations at CERN's Large Hadron Collider" Nobel Prize in Physics (2013) – Peter Higgs and François Englert, for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider Englert's co-researcher Robert Brout had died in 2011, and the Nobel Prize is not ordinarily given posthumously.
Additionally, Physical Review Letters' 50-year review (2008) recognised the 1964 PRL symmetry breaking papers and Weinberg's 1967 paper A Model of Leptons (the most cited paper in particle physics, as of 2012) as "milestone Letters". Following the reported observation of the Higgs-like particle in July 2012, several Indian media outlets reported on the supposed neglect of credit to Indian physicist Satyendra Nath Bose, after whose work in the 1920s the class of particles "bosons" is named (although physicists have described Bose's connection to the discovery as tenuous). Technical aspects and mathematical formulation In the Standard Model, the Higgs field is a four-component scalar field that forms a complex doublet of the weak isospin SU(2) symmetry:

$\phi = \begin{pmatrix} \phi^+ \\ \phi^0 \end{pmatrix}$,

while the field has charge $+\tfrac{1}{2}$ under the weak hypercharge U(1) symmetry. The Higgs part of the Lagrangian is

$\mathcal{L}_H = \left|\left(\partial_\mu - i g W_\mu^a t^a - i g' Y B_\mu\right)\phi\right|^2 + \mu^2\,\phi^\dagger\phi - \lambda\,(\phi^\dagger\phi)^2$,

where $W_\mu^a$ and $B_\mu$ are the gauge bosons of the SU(2) and U(1) symmetries, $g$ and $g'$ their respective coupling constants, $t^a = \sigma^a/2$ (with $\sigma^a$ the Pauli matrices, a complete set of generators of the SU(2) symmetry), and $\mu^2 > 0$ and $\lambda > 0$, so that the ground state breaks the SU(2) symmetry. The ground state of the Higgs field (the bottom of the potential) is degenerate, with different ground states related to each other by a SU(2) gauge transformation. It is always possible to pick a gauge such that in the ground state

$\phi = \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ v \end{pmatrix}$.

The expectation value of $\phi^0$ in the ground state (the vacuum expectation value or VEV) is then $v/\sqrt{2}$, where $v = \mu/\sqrt{\lambda}$. The measured value of this parameter is ~246 GeV. It has units of mass, and is the only free parameter of the Standard Model that is not a dimensionless number. Quadratic terms in $W_\mu$ and $B_\mu$ arise, which give masses to the W and Z bosons:

$m_W = \frac{g v}{2}, \qquad m_Z = \frac{v}{2}\sqrt{g^2 + g'^2}$,

with their ratio determining the Weinberg angle, $\cos\theta_W = \frac{m_W}{m_Z} = \frac{g}{\sqrt{g^2 + g'^2}}$, and leave a massless U(1) photon, $\gamma$. The mass of the Higgs boson itself is given by

$m_H = \sqrt{2\mu^2} = \sqrt{2\lambda}\,v$.

The quarks and the leptons interact with the Higgs field through Yukawa interaction terms:

$\mathcal{L}_{\mathrm{Yukawa}} = -\lambda_u^{ij}\,\bar{Q}_L^i\,\tilde{\phi}\,u_R^j - \lambda_d^{ij}\,\bar{Q}_L^i\,\phi\,d_R^j - \lambda_e^{ij}\,\bar{L}_L^i\,\phi\,e_R^j + \mathrm{h.c.}$, with $\tilde{\phi} = i\sigma^2\phi^*$,

where $Q_L^i$ and $L_L^i$ are the left-handed quark and lepton doublets, $u_R^i$, $d_R^i$ and $e_R^i$ the right-handed singlets of the $i$th generation, $\lambda_u^{ij}$, $\lambda_d^{ij}$ and $\lambda_e^{ij}$ are matrices of Yukawa couplings, and h.c. denotes the hermitian conjugate of all the preceding terms. In the symmetry-breaking ground state, only the terms containing $v$ remain, giving rise to mass terms for the fermions. Rotating the quark and lepton fields to the basis where the matrices of Yukawa couplings are diagonal, one gets

$\mathcal{L}_{\mathrm{mass}} = -m_u^i\,\bar{u}^i u^i - m_d^i\,\bar{d}^i d^i - m_e^i\,\bar{e}^i e^i$,

where the masses of the fermions are $m_f^i = \lambda_f^i\,v/\sqrt{2}$, and $\lambda_f^i$ denote the eigenvalues of the Yukawa matrices.
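As a quick numerical cross-check of these relations, here is a minimal sketch assuming the standard relation $v = (\sqrt{2}\,G_F)^{-1/2}$ and a rough value of the SU(2) coupling (both standard textbook inputs, not taken from the original article):

```python
import math

# Electroweak VEV from the Fermi constant: v = (sqrt(2) * G_F)^(-1/2)
G_F = 1.1663787e-5          # Fermi constant in GeV^-2
v = 1.0 / math.sqrt(math.sqrt(2.0) * G_F)
print(f"v      ~ {v:.1f} GeV")        # ~246 GeV, as quoted above

# W mass from m_W = g*v/2, using a rough SU(2) coupling g ~ 0.65 (assumed):
g = 0.65
print(f"m_W    ~ {g * v / 2:.1f} GeV")  # ~80 GeV, close to the measured value

# Higgs self-coupling implied by m_H = sqrt(2*lambda)*v with m_H ~ 125 GeV:
m_H = 125.0
lam = (m_H / v) ** 2 / 2
print(f"lambda ~ {lam:.3f}")            # ~0.13
```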
https://en.wikipedia.org/wiki/Boson
Boson
In particle physics, a boson is a subatomic particle whose spin quantum number has an integer value (0, 1, 2, ...). Bosons form one of the two fundamental classes of subatomic particle, the other being fermions, which have odd half-integer spin (1/2, 3/2, 5/2, ...). Every observed subatomic particle is either a boson or a fermion. Paul Dirac coined the name boson to commemorate the contribution of Satyendra Nath Bose, an Indian physicist. Some bosons are elementary particles occupying a special role in particle physics, distinct from the role of fermions (which are sometimes described as the constituents of "ordinary matter"). Certain elementary bosons (e.g. gluons) act as force carriers, which give rise to forces between other particles, while one (the Higgs boson) contributes to the phenomenon of mass. Other bosons, such as mesons, are composite particles made up of smaller constituents. Outside the realm of particle physics, multiple identical composite bosons (in this context sometimes known as 'bose particles') behave at high densities or low temperatures in a characteristic manner described by Bose–Einstein statistics: for example, a gas of helium-4 atoms becomes a superfluid at temperatures close to absolute zero. Similarly, superconductivity arises because some quasiparticles, such as Cooper pairs, behave in the same way. Name The name boson was coined by Paul Dirac to commemorate the contribution of Satyendra Nath Bose, an Indian physicist. When Bose was a reader (later professor) at the University of Dhaka, Bengal (now in Bangladesh), he and Albert Einstein developed the theory characterising such particles, now known as Bose–Einstein statistics and Bose–Einstein condensate. Elementary bosons All observed elementary particles are either bosons (with integer spin) or fermions (with odd half-integer spin). Whereas the elementary particles that make up ordinary matter (leptons and quarks) are fermions, elementary bosons occupy a special role in particle physics. They act either as force carriers which give rise to forces between other particles, or in one case give rise to the phenomenon of mass. According to the Standard Model of particle physics there are five elementary bosons: One scalar boson (spin = 0) Higgs boson – the particle that contributes to the phenomenon of mass via the Higgs mechanism Four vector bosons (spin = 1) that act as force carriers. These are the gauge bosons: Photon – the force carrier of the electromagnetic field Gluons (eight different types) – force carriers that mediate the strong force Neutral weak boson (Z) – the force carrier that mediates the weak force Charged weak bosons (W+ and W−, two types) – also force carriers that mediate the weak force A second-order tensor boson (spin = 2) called the graviton (G) has been hypothesised as the force carrier for gravity, but so far all attempts to incorporate gravity into the Standard Model have failed. Composite bosons Composite particles (such as hadrons, nuclei, and atoms) can be bosons or fermions depending on their constituents. Since bosons have integer spin and fermions odd half-integer spin, any composite particle made up of an even number of fermions is a boson. Composite bosons include: All mesons of every type Stable nuclei with even mass numbers, such as deuterium, helium-4 (the alpha particle), carbon-12, lead-208, and many others. As quantum particles, the behaviour of multiple indistinguishable bosons at high densities is described by Bose–Einstein statistics.
One characteristic which becomes important in superfluidity and other applications of Bose–Einstein condensates is that there is no restriction on the number of bosons that may occupy the same quantum state. As a consequence, when, for example, a gas of helium-4 atoms is cooled to temperatures very close to absolute zero and the kinetic energy of the particles becomes negligible, it condenses into a low-energy state and becomes a superfluid. Quasiparticles Certain quasiparticles are observed to behave as bosons and to follow Bose–Einstein statistics, including Cooper pairs, plasmons and phonons.
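The statistics mentioned here have a simple closed form. A minimal sketch of the Bose–Einstein occupation number (standard textbook formula; the energies and temperature below are illustrative assumptions, not from the original article):

```python
import math

def bose_einstein_occupancy(energy, mu, temperature, k_B=8.617333e-5):
    """Mean number of bosons in a state of given energy (eV), for
    chemical potential mu (eV) and temperature (K):
        n(E) = 1 / (exp((E - mu) / (k_B * T)) - 1)
    Unlike the Fermi-Dirac case, n(E) is unbounded as E approaches mu,
    which is what allows macroscopic occupation of a single state."""
    return 1.0 / (math.exp((energy - mu) / (k_B * temperature)) - 1.0)

# Illustrative values only: a state barely above the chemical potential at 2 K
# is already occupied by hundreds of particles on average.
print(bose_einstein_occupancy(energy=1e-6, mu=0.0, temperature=2.0))
```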
https://en.wikipedia.org/wiki/Tor%20%28network%29
Tor (network)
Tor is a free overlay network for enabling anonymous communication. Built on free and open-source software and more than seven thousand volunteer-operated relays worldwide, it lets users have their Internet traffic routed via a random path through the network. Using Tor makes it more difficult to trace a user's Internet activity by preventing any single point on the Internet (other than the user's device) from being able to view both where traffic originated from and where it is ultimately going at the same time. This conceals a user's location and usage from anyone performing network surveillance or traffic analysis from any such point, protecting the user's freedom and ability to communicate confidentially. History The core principle of Tor, known as onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees – mathematician Paul Syverson and computer scientists Michael G. Reed and David Goldschlag – to protect American intelligence communications online. Onion routing is implemented by means of encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which was later given the acronym "Tor"), was launched on 20 September 2002. The first public release occurred a year later. In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development. In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet. Over the course of its existence, various Tor vulnerabilities have been discovered and occasionally exploited. Attacks against Tor are an active area of academic research that is welcomed by The Tor Project itself. Usage Tor enables its users to surf the Internet, chat, and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes. Tor has, for example, been used by criminal enterprises, hacktivism groups, and law enforcement agencies at cross purposes, sometimes simultaneously; likewise, agencies within the U.S. government variously fund Tor (the U.S. State Department, the National Science Foundation, and – through the Broadcasting Board of Governors, which itself partially funded Tor until October 2012 – Radio Free Asia) and seek to subvert it. Tor was one of a dozen circumvention tools evaluated by a Freedom House-funded report based on user experience from China in 2010, which included Ultrasurf, Hotspot Shield, and Freegate. Tor is not meant to completely solve the issue of anonymity on the web, nor is it designed to completely erase tracking; instead, it reduces the likelihood that sites will trace actions and data back to the user. Tor's uses include privacy protection and censorship circumvention, but it is also used for illegal activities, such as distribution of child abuse content, drug sales, and malware distribution.
Tor has been described by The Economist, in relation to Bitcoin and Silk Road, as being "a dark corner of the web". It has been targeted by the American National Security Agency and the British GCHQ signals intelligence agencies, albeit with marginal success, and more successfully by the British National Crime Agency in its Operation Notarise. At the same time, GCHQ has been using a tool named "Shadowcat" for "end-to-end encrypted access to VPS over SSH using the Tor network". Tor can be used for anonymous defamation, unauthorized news leaks of sensitive information, copyright infringement, distribution of illegal sexual content, selling controlled substances, weapons, and stolen credit card numbers, money laundering, bank fraud, credit card fraud, identity theft and the exchange of counterfeit currency; the black market utilizes the Tor infrastructure, at least in part, in conjunction with Bitcoin. It has also been used to brick IoT devices. In its complaint against Ross William Ulbricht of Silk Road, the US Federal Bureau of Investigation acknowledged that Tor has "known legitimate uses". According to CNET, Tor's anonymity function is "endorsed by the Electronic Frontier Foundation (EFF) and other civil liberties groups as a method for whistleblowers and human rights workers to communicate with journalists". EFF's Surveillance Self-Defense guide includes a description of where Tor fits in a larger strategy for protecting privacy and anonymity. In 2014, the EFF's Eva Galperin told Businessweek that "Tor's biggest problem is press. No one hears about that time someone wasn't stalked by their abuser. They hear how somebody got away with downloading child porn." The Tor Project states that Tor users include "normal people" who wish to keep their Internet activities private from websites and advertisers, people concerned about cyber-spying, and users who are evading censorship such as activists, journalists, and military professionals. In November 2013, Tor had about four million users. According to the Wall Street Journal, in 2012 about 14% of Tor's traffic connected from the United States, with people in "Internet-censoring countries" as its second-largest user base. Tor is increasingly used by victims of domestic violence and the social workers and agencies that assist them, even though shelter workers may or may not have had professional training on cyber-security matters. Properly deployed, however, it precludes digital stalking, which has increased due to the prevalence of digital media in contemporary online life. Along with SecureDrop, Tor is used by news organizations such as The Guardian, The New Yorker, ProPublica and The Intercept to protect the privacy of whistleblowers. In March 2015, the Parliamentary Office of Science and Technology released a briefing which stated that "There is widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the U.K." and that "Even if it were, there would be technical challenges." The report further noted that Tor "plays only a minor role in the online viewing and distribution of indecent images of children" (due in part to its inherent latency); its usage by the Internet Watch Foundation, the utility of its onion services for whistleblowers, and its circumvention of the Great Firewall of China were touted. Tor's executive director, Andrew Lewman, also said in August 2014 that agents of the NSA and the GCHQ have anonymously provided Tor with bug reports. 
The Tor Project's FAQ offers supporting reasons for the EFF's endorsement. Operation Tor aims to conceal its users' identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe. These onion routers employ encryption in a multi-layered manner (hence the onion metaphor) to ensure perfect forward secrecy between relays, thereby providing users with anonymity in network location. That anonymity extends to the hosting of censorship-resistant content via Tor's anonymous onion service feature. Furthermore, by keeping some of the entry relays (bridge relays) secret, users can evade Internet censorship that relies upon blocking public Tor relays. Because the IP addresses of the sender and the recipient are not both in cleartext at any hop along the way, anyone eavesdropping at any point along the communication channel cannot directly identify both ends. Furthermore, to the recipient it appears that the last Tor node (called the exit node), rather than the sender, is the originator of the communication. Originating traffic A Tor user's SOCKS-aware applications can be configured to direct their network traffic through a Tor instance's SOCKS interface, which is listening on TCP port 9050 (for standalone Tor) or 9150 (for the Tor Browser bundle) at localhost. Tor periodically creates virtual circuits through the Tor network through which it can multiplex and onion-route that traffic to its destination. Once inside the Tor network, the traffic is sent from router to router along the circuit, ultimately reaching an exit node, at which point the cleartext packet is available and is forwarded on to its original destination. Viewed from the destination, the traffic appears to originate at the Tor exit node. Tor's application independence sets it apart from most other anonymity networks: it works at the Transmission Control Protocol (TCP) stream level. Applications whose traffic is commonly anonymized using Tor include Internet Relay Chat (IRC), instant messaging, and World Wide Web browsing. Onion services Tor can also provide anonymity to websites and other servers. Servers configured to receive inbound connections only through Tor are called onion services (formerly, hidden services). Rather than revealing a server's IP address (and thus its network location), an onion service is accessed through its onion address, usually via the Tor Browser or some other software designed to use Tor. The Tor network understands these addresses by looking up their corresponding public keys and introduction points from a distributed hash table within the network. It can route data to and from onion services, even those hosted behind firewalls or network address translators (NAT), while preserving the anonymity of both parties. Tor is necessary to access these onion services. Because the connection never leaves the Tor network, and is handled by the Tor application on both ends, the connection is always end-to-end encrypted. Onion services were first specified in 2003 and have been deployed on the Tor network since 2004. They are unlisted by design, and can only be discovered on the network if the onion address is already known, though a number of sites and services do catalog publicly known onion addresses.
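As a concrete illustration of the SOCKS interface described under "Originating traffic" above, here is a minimal sketch in Python. It assumes a Tor daemon is already running locally on port 9050 and that the requests library is installed with its SOCKS extra (pip install requests[socks]); the check.torproject.org endpoint is used here only as a convenient example destination.

```python
import requests

# Route an HTTP request through a local Tor SOCKS5 proxy.
# "socks5h" (rather than "socks5") makes DNS resolution happen inside
# Tor as well, so the local resolver never sees the hostname.
TOR_PROXY = "socks5h://127.0.0.1:9050"   # use 9150 for the Tor Browser bundle

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# check.torproject.org reports whether the request arrived via Tor.
response = session.get("https://check.torproject.org/api/ip", timeout=60)
print(response.json())   # expected: {"IsTor": true, "IP": "<exit node IP>"}
```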
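On the server side, the onion services just described are set up through Tor's own configuration file. A hedged sketch of a torrc entry (the directory path and port numbers are illustrative assumptions), mapping virtual port 80 of the onion address to a web server listening locally:

```
# torrc excerpt: publish a local web server as an onion service.
# Tor writes the generated keys and the .onion hostname into this directory.
HiddenServiceDir /var/lib/tor/my_onion_service/
# <virtual port on the .onion address> <local address:port to forward to>
HiddenServicePort 80 127.0.0.1:8080
```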
Popular sources of .onion links include Pastebin, Twitter, Reddit, other Internet forums, and tailored search engines. While onion services are often discussed in terms of websites, they can be used for any TCP service, and are commonly used for increased security or easier routing to non-web services, such as secure shell remote login, chat services such as IRC and XMPP, or file sharing. They have also become a popular means of establishing peer-to-peer connections in messaging and file-sharing applications. Web-based onion services can be accessed from a standard web browser without a client-side connection to the Tor network, using services like Tor2web, which remove client anonymity. Attacks and limitations Like all software with an attack surface, Tor has limitations, and its implementation or design has been vulnerable to attacks at various points throughout its history. While most of these limitations and attacks are minor, either being fixed without incident or proving inconsequential, others are more notable. End-to-end traffic correlation Tor is designed to provide relatively high-performance network anonymity against an attacker with a single vantage point on the connection (e.g., control over one of the three relays, the destination server, or the user's internet service provider). Like all current low-latency anonymity networks, Tor cannot and does not attempt to protect against an attacker performing simultaneous monitoring of traffic at the boundaries of the Tor network – i.e., the traffic entering and exiting the network. While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation via end-to-end correlation. There are no documented cases of this limitation being used at scale; as of the 2013 Snowden leaks, agencies such as the NSA had been unable to perform dragnet surveillance on Tor itself, and relied on attacking other software used in conjunction with Tor, such as vulnerabilities in web browsers. However, targeted attacks have been able to make use of traffic confirmation on individual Tor users, via police surveillance or investigations confirming that a particular person already under suspicion was sending Tor traffic at the exact times the connections in question occurred. The relay early traffic confirmation attack also relied on traffic confirmation as part of its mechanism, though on requests for onion service descriptors rather than traffic to the destination server. Consensus attacks Like many decentralized systems, Tor relies on a consensus mechanism to periodically update its current operating parameters. For Tor, these include network parameters such as which nodes are good or bad relays, exits, and guards, and how much traffic each can handle. Tor's architecture for deciding the consensus relies on a small number of directory authority nodes voting on current network parameters. Currently, there are nine directory authority nodes, and their health is publicly monitored. The IP addresses of the authority nodes are hard-coded into each Tor client. The authority nodes vote every hour to update the consensus, and clients download the most recent consensus on startup. A compromise of the majority of the directory authorities could alter the consensus in a way that is beneficial to an attacker. Alternatively, a network congestion attack, such as a DDoS, could theoretically prevent the consensus nodes from communicating and thus prevent voting to update the consensus (though such an attack would be visible).
Server-side restrictions Tor makes no attempt to conceal the IP addresses of exit relays, or hide from a destination server the fact that a user is connecting via Tor. Operators of Internet sites therefore have the ability to prevent traffic from Tor exit nodes or to offer reduced functionality for Tor users. For example, Wikipedia generally forbids all editing when using Tor or when using an IP address also used by a Tor exit node, and the BBC blocks the IP addresses of all known Tor exit nodes from its iPlayer service. Apart from intentional restrictions of Tor traffic, Tor use can trigger defense mechanisms on websites intended to block traffic from IP addresses observed to generate malicious or abnormal traffic. Because traffic from all Tor users is shared by a comparatively small number of exit relays, tools can misidentify distinct sessions as originating from the same user, and attribute the actions of a malicious user to a non-malicious user, or observe an unusually large volume of traffic for one IP address. Conversely, a site may observe a single session connecting from different exit relays, with different Internet geolocations, and assume the connection is malicious, or trigger geo-blocking. When these defense mechanisms are triggered, it can result in the site blocking access, or presenting captchas to the user. Relay early traffic confirmation attack In July 2014, the Tor Project issued a security advisory for a "relay early traffic confirmation" attack, disclosing the discovery of a group of relays attempting to de-anonymize onion service users and operators. A set of onion service directory nodes (i.e., the Tor relays responsible for providing information about onion services) were found to be modifying traffic of requests. The modifications made it so the requesting client's guard relay, if controlled by the same adversary as the onion service directory node, could easily confirm that the traffic was from the same request. This would allow the adversary to simultaneously know the onion service involved in the request, and the IP address of the client requesting it (where the requesting client could be a visitor or owner of the onion service). The attacking nodes joined the network on 30 January, using a Sybil attack to comprise 6.4% of guard relay capacity, and were removed on 4 July. In addition to removing the attacking relays, the Tor application was patched to prevent the specific traffic modifications that made the attack possible. In November 2014, there was speculation in the aftermath of Operation Onymous, resulting in 17 arrests internationally, that a Tor weakness had been exploited. A representative of Europol was secretive about the method used, saying: "This is something we want to keep for ourselves. The way we do this, we can't share with the whole world, because we want to do it again and again and again." A BBC source cited a "technical breakthrough" that allowed tracking physical locations of servers, and the initial number of infiltrated sites led to the exploit speculation. A Tor Project representative downplayed this possibility, suggesting that execution of more traditional police work was more likely. In November 2015, court documents suggested a connection between the attack and arrests, and raised concerns about security research ethics. The documents revealed that the FBI obtained IP addresses of onion services and their visitors from a "university-based research institute", leading to arrests. 
Reporting from Motherboard found that the timing and nature of the relay early traffic confirmation attack matched the description in the court documents. Multiple experts, including a senior researcher with the International Computer Science Institute (ICSI) at UC Berkeley, Edward Felten of Princeton University, and the Tor Project, agreed that the CERT Coordination Center of Carnegie Mellon University was the institute in question. Concerns raised included the role of an academic institution in policing, sensitive research involving non-consenting users, the non-targeted nature of the attack, and the lack of disclosure about the incident. Vulnerable applications Many attacks targeted at Tor users result from flaws in applications used with Tor, either in the application itself or in how it operates in combination with Tor. For example, researchers with Inria in 2011 performed an attack on BitTorrent users by attacking clients that established connections both using and not using Tor, then associating other connections shared by the same Tor circuit. Fingerprinting When using Tor, applications may still provide data tied to a device, such as information about screen resolution, installed fonts, language configuration, or supported graphics functionality, reducing the set of users a connection could possibly originate from, or uniquely identifying them. This information is known as the device fingerprint, or browser fingerprint in the case of web browsers. Applications implemented with Tor in mind, such as Tor Browser, can be designed to minimize the amount of information leaked by the application and reduce its fingerprint. Eavesdropping Tor cannot encrypt the traffic between an exit relay and the destination server. If an application does not add an additional layer of end-to-end encryption between the client and the server, such as Transport Layer Security (TLS, used in HTTPS) or the Secure Shell (SSH) protocol, this allows the exit relay to capture and modify traffic. Attacks from malicious exit relays have recorded usernames and passwords, and modified Bitcoin addresses to redirect transactions. Some of these attacks involved actively removing the HTTPS protections that would otherwise have been used. To attempt to prevent this, Tor Browser has since made it so that only connections via onion services or HTTPS are allowed by default. Firefox/Tor Browser attacks In 2011, the Dutch authority investigating child pornography discovered the IP address of a Tor onion service site from an unprotected administrator's account and gave it to the FBI, who traced it to Aaron McGrath. After a year of surveillance, the FBI launched "Operation Torpedo", which resulted in McGrath's arrest and allowed the agency to install its Network Investigative Technique (NIT) malware on the servers, to retrieve information from the users of the three onion service sites that McGrath controlled. The technique exploited a vulnerability in Firefox/Tor Browser that had already been patched, and therefore targeted users who had not updated. A Flash application sent a user's IP address directly back to an FBI server, revealing at least 25 US users as well as numerous users from other countries. McGrath was sentenced to 20 years in prison in early 2014, while at least 18 others (including a former Acting HHS Cyber Security Director) were sentenced in subsequent cases.
In August 2013, it was discovered that the Firefox browsers in many older versions of the Tor Browser Bundle were vulnerable to a JavaScript-deployed shellcode attack, as NoScript was not enabled by default. Attackers used this vulnerability to extract users' MAC and IP addresses and Windows computer names. News reports linked this to an FBI operation targeting Freedom Hosting's owner, Eric Eoin Marques, who was arrested on a provisional extradition warrant issued by a United States court on 29 July. The FBI extradited Marques from Ireland to the state of Maryland on four charges: distributing, conspiring to distribute, and advertising child pornography, as well as aiding and abetting the advertising of child pornography. The FBI acknowledged the attack in a 12 September 2013 court filing in Dublin; further technical details from a training presentation leaked by Edward Snowden revealed the code name for the exploit as "EgotisticalGiraffe". In 2022, Kaspersky researchers found that when looking up "Tor Browser" in Chinese on YouTube, one of the URLs provided under the top-ranked Chinese-language video actually pointed to malware disguised as Tor Browser. Once installed, it saved browsing history and form data that the genuine Tor Browser discards by default, and downloaded malicious components if the device's IP address was in China. Kaspersky researchers noted that the malware was not stealing data to sell for profit, but was designed to identify users. Onion service configuration Like client applications that use Tor, servers relying on onion services for protection can introduce their own weaknesses. Servers that are reachable through Tor onion services and the public Internet can be subject to correlation attacks, and all onion services are susceptible to misconfigured services (e.g., identifying information included by default in web server error responses), leaking uptime and downtime statistics, intersection attacks, or various user errors. The OnionScan program, written by independent security researcher Sarah Jamie Lewis, comprehensively examines onion services for such flaws and vulnerabilities. Software The main implementation of Tor is written primarily in C. Tor Browser The Tor Browser is a web browser capable of accessing the Tor network. It was created as the Tor Browser Bundle by Steven J. Murdoch and announced in January 2008. The Tor Browser consists of a modified Mozilla Firefox ESR web browser, the TorButton, TorLauncher, NoScript and the Tor proxy. Users can run the Tor Browser from removable media. It can operate under Microsoft Windows, macOS, Android and Linux. The default search engine is DuckDuckGo (until version 4.5, Startpage.com was its default). The Tor Browser automatically starts Tor background processes and routes traffic through the Tor network. Upon termination of a session, the browser deletes privacy-sensitive data such as HTTP cookies and the browsing history. This is effective in reducing web tracking and canvas fingerprinting, and it also helps to prevent creation of a filter bubble. To allow downloads from places where accessing the Tor Project URL may be risky or blocked, a GitHub repository is maintained with links for releases hosted in other domains. Tor Messenger On 29 October 2015, the Tor Project released Tor Messenger Beta, an instant messaging program based on Instantbird with Tor and OTR built in and used by default.
Like Pidgin and Adium, Tor Messenger supports multiple different instant messaging protocols; however, it accomplishes this without relying on libpurple, implementing all chat protocols in the memory-safe language JavaScript instead. According to Lucian Armasu of Tom's Hardware, in April 2018, the Tor Project shut down the Tor Messenger project for three reasons: the developers of Instantbird had discontinued support for their own software, and the project had limited resources and known metadata problems. The Tor Messenger developers explained that overcoming any vulnerabilities discovered in the future would be impossible because the project relied on outdated software dependencies. Tor Phone In 2016, Tor developer Mike Perry announced a prototype Tor-enabled smartphone based on CopperheadOS. It was meant as a direction for Tor on mobile. The project was called 'Mission Improbable'. Copperhead's then lead developer Daniel Micay welcomed the prototype. Third-party applications The Vuze (formerly Azureus) BitTorrent client, Bitmessage anonymous messaging system, and TorChat instant messenger include Tor support. The Briar messenger routes all messaging via Tor by default. OnionShare allows users to share files using Tor. The Guardian Project is actively developing a free and open-source suite of applications and firmware for the Android operating system to improve the security of mobile communications. The applications include the ChatSecure instant messaging client, Orbot Tor implementation (also available for iOS), Orweb (discontinued) privacy-enhanced mobile browser, Orfox, the mobile counterpart of the Tor Browser, ProxyMob Firefox add-on, and ObscuraCam. Onion Browser is an open-source, privacy-enhancing web browser for iOS that uses Tor. It is available in the iOS App Store, and source code is available on GitHub. Brave added support for Tor in its desktop browser's private-browsing mode. Security-focused operating systems In September 2024, it was announced that Tails, a security-focused operating system, had become part of the Tor Project. Other security-focused operating systems that make or made extensive use of Tor include Hardened Linux From Scratch, Incognito, Liberté Linux, Qubes OS, Subgraph, Parrot OS, Tor-ramdisk, and Whonix. Reception, impact, and legislation Tor has been praised for providing privacy and anonymity to vulnerable Internet users such as political activists fearing surveillance and arrest, ordinary web users seeking to circumvent censorship, and people who have been threatened with violence or abuse by stalkers. The U.S. National Security Agency (NSA) has called Tor "the king of high-secure, low-latency Internet anonymity", and BusinessWeek magazine has described it as "perhaps the most effective means of defeating the online surveillance efforts of intelligence agencies around the world". Other media have described Tor as "a sophisticated privacy tool", "easy to use" and "so secure that even the world's most sophisticated electronic spies haven't figured out how to crack it". Advocates for Tor say it supports freedom of expression, including in countries where the Internet is censored, by protecting the privacy and anonymity of users. The mathematical underpinnings of Tor lead it to be characterized as acting "like a piece of infrastructure, and governments naturally fall into paying for infrastructure they want to use". The project was originally developed on behalf of the U.S. intelligence community and continues to receive U.S.
government funding, and has been criticized as "more resembl[ing] a spook project than a tool designed by a culture that values accountability or transparency". At one point, 80% of The Tor Project's $2 million annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors, aiming "to aid democracy advocates in authoritarian states". Other public sources of funding include DARPA, the U.S. Naval Research Laboratory, and the Government of Sweden. Some have proposed that the government values Tor's commitment to free speech, and uses the darknet to gather intelligence. Tor also receives funding from NGOs including Human Rights Watch, and private sponsors including Reddit and Google. Dingledine said that the United States Department of Defense funds are more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S. federal government, the Tor service did not collaborate with the NSA to reveal identities of users. Critics say that Tor is not as secure as it claims, pointing to U.S. law enforcement's investigations and shutdowns of Tor-using sites such as web-hosting company Freedom Hosting and online marketplace Silk Road. In October 2013, after analyzing documents leaked by Edward Snowden, The Guardian reported that the NSA had repeatedly tried to crack Tor and had failed to break its core security, although it had had some success attacking the computers of individual Tor users. The Guardian also published a 2012 NSA classified slide deck, entitled "Tor Stinks", which said: "We will never be able to de-anonymize all Tor users all the time", but "with manual analysis we can de-anonymize a very small fraction of Tor users". When Tor users are arrested, it is typically due to human error, not to the core technology being hacked or cracked. On 7 November 2014, for example, a joint operation by the FBI, ICE Homeland Security Investigations, and European law enforcement agencies led to 17 arrests and the seizure of 27 sites containing 400 pages. A late 2014 report by Der Spiegel using a new cache of Snowden leaks revealed, however, that the NSA deemed Tor on its own as a "major threat" to its mission, and when used in conjunction with other privacy tools such as OTR, Cspace, ZRTP, RedPhone, Tails, and TrueCrypt was ranked as "catastrophic," leading to a "near-total loss/lack of insight to target communications, presence..." 2011 In March 2011, The Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt." Iran tried to block Tor at least twice in 2011. One attempt simply blocked all servers with 2-hour-expiry security certificates; it was successful for less than 24 hours. 2012 In 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers". 2013 In 2013, Jacob Appelbaum described Tor as a "part of an ecosystem of software that helps people regain and reclaim their autonomy.
It helps to enable people to have agency of all kinds; it helps others to help each other and it helps you to help yourself. It runs, it is open and it is supported by a large community spread across all walks of life." In June 2013, whistleblower Edward Snowden used Tor to send information about PRISM to The Washington Post and The Guardian. 2014 In 2014, the Russian government offered a $111,000 contract to "study the possibility of obtaining technical information about users and users' equipment on the Tor anonymous network". In September 2014, in response to reports that Comcast had been discouraging customers from using the Tor Browser, Comcast issued a public statement that "We have no policy against Tor, or any other browser or software." In October 2014, The Tor Project hired the public relations firm Thomson Communications to improve its public image (particularly regarding the terms "Dark Net" and "hidden services," which are widely viewed as being problematic) and to educate journalists about the technical aspects of Tor. Turkey blocked downloads of Tor Browser from the Tor Project. 2015 In June 2015, the special rapporteur from the United Nations' Office of the High Commissioner for Human Rights specifically mentioned Tor in the context of the debate in the U.S. about allowing so-called backdoors in encryption programs for law enforcement purposes in an interview for The Washington Post. In July 2015, the Tor Project announced an alliance with the Library Freedom Project to establish exit nodes in public libraries. The pilot program, which established a middle relay running on the excess bandwidth afforded by the Kilton Library in Lebanon, New Hampshire (making it the first library in the U.S. to host a Tor node), was briefly put on hold when the local city manager and deputy sheriff voiced concerns over the cost of defending search warrants for information passed through the Tor exit node. Although the Department of Homeland Security (DHS) had alerted New Hampshire authorities to the fact that Tor is sometimes used by criminals, the Lebanon Deputy Police Chief and the Deputy City Manager averred that no pressure to strong-arm the library was applied, and the service was re-established on 15 September 2015. U.S. Rep. Zoe Lofgren (D-Calif.) released a letter on 10 December 2015, in which she asked the DHS to clarify its procedures, stating that "While the Kilton Public Library's board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. citizens." In a 2016 interview, Kilton Library IT Manager Chuck McAndrew stressed the importance of getting libraries involved with Tor: "Librarians have always cared deeply about protecting privacy, intellectual freedom, and access to information (the freedom to read). Surveillance has a very well-documented chilling effect on intellectual freedom. It is the job of librarians to remove barriers to information." The second library to host a Tor node was the Las Naves Public Library in Valencia, Spain, implemented in the first months of 2016. In August 2015, an IBM security research group, called "X-Force", put out a quarterly report that advised companies to block Tor on security grounds, citing a "steady increase" in attacks from Tor exit nodes as well as botnet traffic.
In September 2015, Luke Millanta created OnionView (now defunct), a web service that plots the location of active Tor relay nodes onto an interactive map of the world. The project's purpose was to detail the network's size and escalating growth rate. In December 2015, Daniel Ellsberg (of the Pentagon Papers), Cory Doctorow (of Boing Boing), Edward Snowden, and artist-activist Molly Crabapple, amongst others, announced their support of Tor. 2016 In March 2016, New Hampshire state representative Keith Ammon introduced a bill allowing public libraries to run privacy software. The bill specifically referenced Tor. The text was crafted with extensive input from Alison Macrina, the director of the Library Freedom Project. The bill was passed by the House 268–62. Also in March 2016, the first Tor node, specifically a middle relay, was established at a library in Canada, the Graduate Resource Centre (GRC) in the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario. Given that the running of a Tor exit node is an unsettled area of Canadian law, and that in general institutions are more capable than individuals of coping with legal pressures, Alison Macrina of the Library Freedom Project has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established. On 16 May 2016, CNN reported on the case of core Tor developer "isis agora lovecruft", who had fled to Germany under the threat of a subpoena by the FBI during the Thanksgiving break of the previous year. The Electronic Frontier Foundation legally represented lovecruft. On 2 December 2016, The New Yorker reported on burgeoning digital privacy and security workshops in the San Francisco Bay Area, particularly at the hackerspace Noisebridge, in the wake of the 2016 United States presidential election; downloading the Tor browser was mentioned. Also, in December 2016, Turkey blocked the use of Tor, together with ten of the most used VPN services in Turkey, which were popular ways of accessing banned social media sites and services. Tor (and Bitcoin) was fundamental to the operation of the dark web marketplace AlphaBay, which was taken down in an international law enforcement operation in July 2017. Despite federal claims that Tor would not shield a user, elementary operational security errors outside of the ambit of the Tor network led to the site's downfall. 2017 In June 2017, the Democratic Socialists of America recommended intermittent Tor usage for politically active organizations and individuals as a defensive mitigation against information security threats. In August 2017, according to news reports, cybersecurity firms which specialize in monitoring and researching the dark web (which relies on Tor as its infrastructure) on behalf of banks and retailers routinely share their findings with the FBI and with other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as being particularly robust. 2018 In June 2018, Venezuela blocked access to the Tor network. The block affected both direct connections to the network and connections being made via bridge relays.
On 20 June 2018, Bavarian police raided the homes of the board members of the non-profit Zwiebelfreunde, a member of torservers.net, which handles the European financial transactions of riseup.net, in connection with a blog post there which apparently promised violence against the upcoming Alternative for Germany convention. Tor came out strongly against the raid on its support organization, which provides legal and financial aid for the setting up and maintenance of high-speed relays and exit nodes. According to torservers.net, on 23 August 2018 the German court at Landgericht München ruled that the raid and seizures were illegal. The hardware and documentation seized had been kept under seal, and purportedly were neither analyzed nor evaluated by the Bavarian police. Since October 2018, Chinese online communities within Tor have begun to dwindle due to increased efforts to stop them by the Chinese government. 2019 In November 2019, Edward Snowden called for a full, unabridged simplified Chinese translation of his autobiography, Permanent Record, as the Chinese publisher had violated their agreement by expurgating all mentions of Tor and other matters deemed politically sensitive by the Chinese Communist Party. 2021 On 8 December 2021, the Russian government agency Roskomnadzor announced that it had banned Tor and six VPN services for failing to abide by the Russian Internet blacklist. Russian ISPs unsuccessfully attempted to block Tor's main website as well as several bridges beginning on 1 December 2021. The Tor Project has appealed to Russian courts over this ban. 2022 In response to Internet censorship during the Russian invasion of Ukraine, the BBC and VOA have directed Russian audiences to Tor. The Russian government increased efforts to block access to Tor through technical and political means, while the network reported an increase in traffic from Russia, and increased Russian use of its anti-censorship Snowflake tool. Russian courts temporarily lifted the blockade on Tor's website (but not connections to relays) on 24 May 2022, due to Russian law requiring that the Tor Project be involved in the case. However, the blockade was reinstated on 21 July 2022. Iran implemented rolling internet blackouts during the Mahsa Amini protests, and Tor and Snowflake were used to circumvent them. China, with its highly centralized control of its internet, had effectively blocked Tor. Improved security Tor responded to the earlier vulnerabilities listed above by patching them and improving security. In one way or another, human (user) errors can lead to detection. The Tor Project website provides best practices (instructions) on how to properly use the Tor Browser. When improperly used, Tor is not secure. For example, Tor warns its users that not all traffic is protected; only the traffic routed through the Tor Browser is protected. Users are also warned to use HTTPS versions of websites, not to torrent with Tor, not to enable browser plugins, not to open documents downloaded through Tor while online, and to use safe bridges. Users are also warned that they cannot provide their name or other revealing information in web forums over Tor and stay anonymous at the same time. Despite claims by intelligence agencies in 2013 that 80% of Tor users would be de-anonymized within six months, that has still not happened. In fact, as late as September 2016, the FBI could not locate, de-anonymize and identify the Tor user who hacked into the email account of a staffer on Hillary Clinton's email server.
The best tactic of law enforcement agencies to de-anonymize users appears to rely on Tor-relay adversaries running poisoned nodes, as well as on users themselves using the Tor Browser improperly. For example, downloading a video through the Tor Browser and then opening the same file on an unprotected hard drive while online can make a user's real IP address available to authorities. Odds of detection When properly used, the odds of being de-anonymized through Tor are said to be extremely low. The Tor Project's co-founder Nick Mathewson explained that the problem of "Tor-relay adversaries" running poisoned nodes means that a theoretical adversary of this kind is not the network's greatest threat: Tor does not provide protection against end-to-end timing attacks: if an attacker can watch the traffic coming out of the target computer, and also the traffic arriving at the target's chosen destination (e.g. a server hosting a .onion site), that attacker can use statistical analysis to discover that they are part of the same circuit. A similar attack was used by German authorities to track down users related to Boystown. Levels of security Depending on individual user needs, the Tor Browser offers three levels of security, located under the Security Level icon (the small gray shield at the top-right of the screen) > Advanced Security Settings. In addition to encrypting the data and constantly changing the IP address through a virtual circuit comprising successive, randomly selected Tor relays, several other layers of security are at a user's disposal: Standard At this level, all features of the Tor Browser and other websites are enabled. Safer This level eliminates website features that are often pernicious to the user. This may cause some sites to lose functionality. JavaScript is disabled on all non-HTTPS sites; some fonts and mathematical symbols are disabled. Also, audio and video (HTML5 media) are click-to-play. Safest This level only allows website features required for static sites and basic services. These changes affect images, media, and scripts. JavaScript is disabled by default on all sites; some fonts, icons, math symbols, and images are disabled; audio and video (HTML5 media) are click-to-play. Introduction of Proof-of-Work Defense for Onion Services With the release of Tor 0.4.8, Tor unveiled a new defense mechanism to safeguard its onion services against denial-of-service (DoS) attacks. This proof-of-work (PoW) defense is designed to prioritize legitimate network traffic while deterring malicious attacks.
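The general principle behind such a proof-of-work defense can be illustrated with a minimal hashcash-style client puzzle: the client performs a moderately expensive computation whose result the service can verify cheaply, letting a service under attack triage requests by the effort spent. This is a simplified sketch of the idea only; Tor's actual 0.4.8 defense is built on the Equi-X puzzle with dynamically adjusted difficulty, and the function names and parameters below are invented for illustration.

```python
import hashlib
import os

def solve_puzzle(service_id: bytes, difficulty_bits: int) -> bytes:
    """Brute-force a nonce so that SHA-256(service_id || nonce) falls
    below a target, i.e. starts with `difficulty_bits` zero bits."""
    target = 1 << (256 - difficulty_bits)
    while True:
        nonce = os.urandom(8)
        digest = hashlib.sha256(service_id + nonce).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # costly to find, cheap to verify

def verify_puzzle(service_id: bytes, nonce: bytes, difficulty_bits: int) -> bool:
    """Verification is a single hash, so a service under DoS load can
    cheaply prioritize clients that have paid the computational cost."""
    digest = hashlib.sha256(service_id + nonce).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: a client spends CPU time before its request is honored.
nonce = solve_puzzle(b"example.onion", difficulty_bits=16)
assert verify_puzzle(b"example.onion", nonce, difficulty_bits=16)
```

Raising the difficulty makes each request proportionally more expensive for an attacker flooding the service, while an occasional legitimate visitor barely notices the delay.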
Technology
Internet
null
739199
https://en.wikipedia.org/wiki/Continued%20fraction
Continued fraction
A continued fraction is a mathematical expression that can be written as a fraction with a denominator that is a sum that contains another simple or continued fraction. Depending on whether this iteration terminates with a simple fraction or not, the continued fraction is finite or infinite. Different fields of mathematics have different terminology and notation for continued fractions. In number theory the standard unqualified use of the term continued fraction refers to the special case where all numerators are 1, and is treated in the article Simple continued fraction. The present article treats the case where numerators and denominators are sequences of constants or functions. From the perspective of number theory, these are called generalized continued fractions. From the perspective of complex analysis or numerical analysis, however, they are just standard, and in the present article they will simply be called "continued fractions". Formulation A continued fraction is an expression of the form $x = b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \cfrac{a_3}{b_3 + \ddots}}}$, where the $a_n$ ($n > 0$) are the partial numerators, the $b_n$ are the partial denominators, and the leading term $b_0$ is called the integer part of the continued fraction. The successive convergents of the continued fraction are formed by applying the fundamental recurrence formulas: $x_n = A_n / B_n$, where $A_n$ is the numerator and $B_n$ is the denominator, called continuants, of the $n$th convergent. They are given by the three-term recurrence relation $A_n = b_n A_{n-1} + a_n A_{n-2}$, $B_n = b_n B_{n-1} + a_n B_{n-2}$, with initial values $A_{-1} = 1$, $A_0 = b_0$, $B_{-1} = 0$, $B_0 = 1$. If the sequence of convergents $\{x_n\}$ approaches a limit, the continued fraction is convergent and has a definite value. If the sequence of convergents never approaches a limit, the continued fraction is divergent. It may diverge by oscillation (for example, the odd and even convergents may approach two different limits), or it may produce an infinite number of zero denominators $B_n$. History The story of continued fractions begins with the Euclidean algorithm, a procedure for finding the greatest common divisor of two natural numbers $m$ and $n$. That algorithm introduced the idea of dividing to extract a new remainder, and then dividing by the new remainder repeatedly. Nearly two thousand years passed before Rafael Bombelli devised a technique for approximating the roots of quadratic equations with continued fractions in the mid-sixteenth century. Now the pace of development quickened. Just 24 years later, in 1613, Pietro Cataldi introduced the first formal notation for the generalized continued fraction. Cataldi represented a continued fraction with dots indicating where the next fraction goes, and each ampersand (&) representing a modern plus sign. Late in the seventeenth century John Wallis introduced the term "continued fraction" into mathematical literature. New techniques for mathematical analysis (Newton's and Leibniz's calculus) had recently come onto the scene, and a generation of Wallis' contemporaries put the new phrase to use. In 1748 Euler published a theorem showing that a particular kind of continued fraction is equivalent to a certain very general infinite series. Euler's continued fraction formula is still the basis of many modern proofs of convergence of continued fractions. In 1761, Johann Heinrich Lambert gave the first proof that $\pi$ is irrational, by using a continued fraction for $\tan x$: $\tan x = \cfrac{x}{1 - \cfrac{x^2}{3 - \cfrac{x^2}{5 - \cfrac{x^2}{7 - \ddots}}}}$. Continued fractions can also be applied to problems in number theory, and are especially useful in the study of Diophantine equations.
In the late eighteenth century Lagrange used continued fractions to construct the general solution of Pell's equation, thus answering a question that had fascinated mathematicians for more than a thousand years. Lagrange's discovery implies that the canonical continued fraction expansion of the square root of every non-square integer is periodic and that, if the period is of length $p$, it contains a palindromic string of length $p - 1$. In 1813 Gauss derived from complex-valued hypergeometric functions what is now called Gauss's continued fractions. They can be used to express many elementary functions and some more advanced functions (such as the Bessel functions), as continued fractions that are rapidly convergent almost everywhere in the complex plane. Notation The long continued fraction expression displayed in the introduction is easy for an unfamiliar reader to interpret. However, it takes up a lot of space and can be difficult to typeset. So mathematicians have devised several alternative notations. One convenient way to express a generalized continued fraction sets each nested fraction on the same line, indicating the nesting by dangling plus signs in the denominators: $x = b_0 + \frac{a_1}{b_1 +}\, \frac{a_2}{b_2 +}\, \frac{a_3}{b_3 +} \cdots$. Sometimes the plus signs are typeset to vertically align with the denominators but not under the fraction bars. Pringsheim wrote a generalized continued fraction this way: $x = b_0 + \frac{a_1 \mid}{\mid b_1} + \frac{a_2 \mid}{\mid b_2} + \frac{a_3 \mid}{\mid b_3} + \cdots$. Carl Friedrich Gauss evoked the more familiar infinite product when he devised this notation: $x = b_0 + \underset{i=1}{\overset{\infty}{\mathrm{K}}} \frac{a_i}{b_i}$. Here the "K" stands for Kettenbruch, the German word for "continued fraction". This is probably the most compact and convenient way to express continued fractions; however, it is not widely used by English typesetters. Some elementary considerations Here are some elementary results that are of fundamental importance in the further development of the analytic theory of continued fractions. Partial numerators and denominators If one of the partial numerators $a_{n+1}$ is zero, the infinite continued fraction is really just a finite continued fraction with $n$ fractional terms, and therefore a rational function of $a_1$ to $a_n$ and $b_0$ to $b_n$. Such an object is of little interest from the point of view adopted in mathematical analysis, so it is usually assumed that all $a_i \neq 0$. There is no need to place this restriction on the partial denominators $b_i$. The determinant formula When the $n$th convergent of a continued fraction is expressed as a simple fraction $x_n = A_n / B_n$ we can use the determinant formula $A_{n-1} B_n - A_n B_{n-1} = (-1)^n a_1 a_2 \cdots a_n$ to relate the numerators and denominators of successive convergents $x_n$ and $x_{n-1}$ to one another. The proof for this can be easily seen by induction. Base case The case $n = 1$ results from a very simple computation. Inductive step Assume that the formula holds for $n - 1$. Then we need to see the same relation holding true for $n$. Substituting the values of $A_n$ and $B_n$ given by the three-term recurrence relation, we obtain $A_{n-1} B_n - A_n B_{n-1} = -a_n (A_{n-2} B_{n-1} - A_{n-1} B_{n-2})$, which is true because of our induction hypothesis. Specifically, if neither $B_n$ nor $B_{n-1}$ is zero ($n > 0$) we can express the difference between the $n$th and $(n-1)$th convergents like this: $x_n - x_{n-1} = (-1)^{n+1} \frac{a_1 a_2 \cdots a_n}{B_n B_{n-1}}$. The equivalence transformation If $\{c_i\} = \{c_1, c_2, c_3, \ldots\}$ is any infinite sequence of non-zero complex numbers we can prove, by induction, that $b_0 + \cfrac{a_1}{b_1 + \cfrac{a_2}{b_2 + \ddots}} = b_0 + \cfrac{c_1 a_1}{c_1 b_1 + \cfrac{c_1 c_2 a_2}{c_2 b_2 + \cfrac{c_2 c_3 a_3}{c_3 b_3 + \ddots}}}$, where equality is understood as equivalence, which is to say that the successive convergents of the continued fraction on the left are exactly the same as the convergents of the fraction on the right. The equivalence transformation is perfectly general, but two particular cases deserve special mention. First, if none of the $a_i$ are zero, a sequence $\{c_i\}$ can be chosen to make each partial numerator a 1: where $c_1 = 1/a_1$, $c_2 = a_1/a_2$, $c_3 = a_2/(a_1 a_3)$, and in general $c_{n+1} = 1/(a_{n+1} c_n)$.
Second, if none of the partial denominators $b_i$ are zero, we can use a similar procedure to choose another sequence $\{d_i\}$ to make each partial denominator a 1. These two special cases of the equivalence transformation are enormously useful when the general convergence problem is analyzed. Notions of convergence As mentioned in the introduction, the continued fraction converges if the sequence of convergents $\{x_n\}$ tends to a finite limit. This notion of convergence is very natural, but it is sometimes too restrictive. It is therefore useful to introduce the notion of general convergence of a continued fraction. Roughly speaking, this consists in replacing the tail of the fraction by an arbitrary value $w_n$, instead of by 0, to compute the convergents. The convergents thus obtained are called modified convergents. We say that the continued fraction converges generally if there exists a sequence $\{w_n^\dagger\}$ such that the sequence of modified convergents converges for all sequences $\{w_n\}$ sufficiently distinct from $\{w_n^\dagger\}$. The sequence $\{w_n^\dagger\}$ is then called an exceptional sequence for the continued fraction. A rigorous definition can be found in the literature. There also exists a notion of absolute convergence for continued fractions, which is based on the notion of absolute convergence of a series: a continued fraction is said to be absolutely convergent when the series $\sum_n (x_n - x_{n-1})$, where $x_n$ are the convergents of the continued fraction, converges absolutely. The Śleszyński–Pringsheim theorem provides a sufficient condition for absolute convergence. Finally, a continued fraction of one or more complex variables is uniformly convergent in an open neighborhood $\Omega$ when its convergents converge uniformly on $\Omega$; that is, when for every $\varepsilon > 0$ there exists $N$ such that, for all $n > N$ and for all points $z \in \Omega$, the convergents lie within $\varepsilon$ of the limit. Even and odd convergents It is sometimes necessary to separate a continued fraction into its even and odd parts. For example, if the continued fraction diverges by oscillation between two distinct limit points $p$ and $q$, then the sequence of even convergents $\{x_{2n}\}$ must converge to one of these, and the odd convergents $\{x_{2n+1}\}$ must converge to the other. In such a situation it may be convenient to express the original continued fraction as two different continued fractions, one of them converging to $p$, and the other converging to $q$. The formulas for the even and odd parts of a continued fraction can be written most compactly if the fraction has already been transformed so that all its partial denominators are unity. Specifically, the even part and the odd part of the continued fraction are the continued fractions whose successive convergents are the even-indexed convergents $x_0, x_2, x_4, \ldots$ and the odd-indexed convergents $x_1, x_3, x_5, \ldots$ of the original fraction, respectively. Conditions for irrationality If $a_n$ and $b_n$ are positive integers with $a_n \le b_n$ for all sufficiently large $n$, then the continued fraction converges to an irrational limit. Fundamental recurrence formulas The partial numerators and denominators of the fraction's successive convergents are related by the fundamental recurrence formulas $A_n = b_n A_{n-1} + a_n A_{n-2}$ and $B_n = b_n B_{n-1} + a_n B_{n-2}$. The continued fraction's successive convergents are then given by $x_n = A_n / B_n$. These recurrence relations are due to John Wallis (1616–1703) and Leonhard Euler (1707–1783), and are simply a different notation for the relations obtained by Pietro Antonio Cataldi (1548–1626). As an example, consider the regular continued fraction in canonical form that represents the golden ratio $\phi$: $\phi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \ddots}}}$. Applying the fundamental recurrence formulas we find that the successive numerators $A_n$ are 1, 2, 3, 5, 8, 13, ... and the successive denominators $B_n$ are 1, 1, 2, 3, 5, 8, ..., the Fibonacci numbers.
Since all the partial numerators in this example are equal to one, the determinant formula assures us that the absolute value of the difference between successive convergents approaches zero quite rapidly. Linear fractional transformations A linear fractional transformation (LFT) is a complex function of the form $w = f(z) = \frac{a + bz}{c + dz}$, where $z$ is a complex variable, and $a$, $b$, $c$, $d$ are arbitrary complex constants such that $c + dz \neq 0$. An additional restriction, that $ad \neq bc$, is customarily imposed, to rule out the cases in which $w = f(z)$ is a constant. The linear fractional transformation, also known as a Möbius transformation, has many fascinating properties. Four of these are of primary importance in developing the analytic theory of continued fractions. If $d \neq 0$ the LFT has one or two fixed points. This can be seen by considering the equation $f(z) = z$, which is clearly a quadratic equation in $z$. The roots of this equation are the fixed points of $f(z)$. If the discriminant is zero the LFT fixes a single point; otherwise it has two fixed points. The LFT is an invertible conformal mapping of the extended complex plane onto itself. In other words, this LFT has an inverse function such that $f^{-1}(f(z)) = z$ for every point in the extended complex plane, and both $f$ and $f^{-1}$ preserve angles and shapes at vanishingly small scales. From the form of the inverse we see that it is also an LFT. The composition of two different LFTs is itself an LFT. In other words, the set of all LFTs is closed under composition of functions. The collection of all such LFTs, together with the "group operation" composition of functions, is known as the automorphism group of the extended complex plane. The LFT reduces, in a degenerate case, to a very simple meromorphic function of $z$ with one simple pole and a non-zero residue there.
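The fundamental recurrence formulas above lend themselves directly to computation. The following is a minimal illustrative sketch (the function name and floating-point choices are my own, not a standard library API) that evaluates successive convergents $x_n = A_n / B_n$ of a generalized continued fraction, reproducing the golden ratio example in which every partial numerator and partial denominator equals 1.

```python
from itertools import islice, repeat

def convergents(b0, terms):
    """Yield successive convergents A_n / B_n of the continued fraction
    b0 + a1/(b1 + a2/(b2 + ...)), where `terms` yields pairs (a_n, b_n),
    using the three-term recurrences
    A_n = b_n*A_{n-1} + a_n*A_{n-2} and B_n = b_n*B_{n-1} + a_n*B_{n-2}."""
    A_prev, A = 1.0, float(b0)   # A_{-1} = 1, A_0 = b_0
    B_prev, B = 0.0, 1.0         # B_{-1} = 0, B_0 = 1
    for a_n, b_n in terms:
        A_prev, A = A, b_n * A + a_n * A_prev
        B_prev, B = B, b_n * B + a_n * B_prev
        yield A / B

# Golden ratio: all partial numerators and denominators equal to 1.
phi_terms = repeat((1.0, 1.0))  # endless stream of (a_n, b_n) = (1, 1)
for x in islice(convergents(1.0, phi_terms), 8):
    print(x)  # 2.0, 1.5, 1.666..., 1.6, 1.625, ... -> 1.618 (phi)
```

The printed values are ratios of successive Fibonacci numbers, matching the numerators and denominators derived in the golden ratio example above.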
Mathematics
Basics
null
740355
https://en.wikipedia.org/wiki/Spitz
Spitz
A spitz (in reference to the pointed muzzle) is a type of domestic dog consisting of between 50 and 70 breeds depending on classification. There is no precise definition of 'spitz' but typically most spitz breeds have pricked ears, almond-shaped eyes, a pointed muzzle, a double coat, and a tail that curves over the back. The exact origins of spitz dogs remain unknown, though most of the spitzes seen today originate from the Arctic region or from Siberia. Johann Friedrich Gmelin described the type as Canis pomeranus in his 1788 revision of Systema Naturae. History Dogs of spitz type have been depicted on the tombs of the IVth dynasty of Egypt. Toy dogs of spitz type have been depicted on Ancient Greek pottery since 500 B.C. Characteristics Spitzes are well suited to living in harsh northern climates. They often have an insulating, waterproof undercoat that is denser than the topcoat to trap warmth. Small, upright ears help to reduce the risk of frostbite, while square proportions and thick fur growing on the paws protect the dogs from sharp ice. Many spitz breeds, like the Japanese Akita and Chow Chow, retain wolf-like characteristics such as independence, suspiciousness, and aggression towards unfamiliar humans and other dogs, and they require much training and socialization when they are puppies before they become manageable in an urban environment. Some, such as the Karelian Bear Dog, are more difficult to train as companion dogs. Some breeds, such as the Pomeranian, have manes. Several spitz breeds (such as huskies) are bred for one purpose only. However, it is common for many spitz breeds (such as the Russian laikas) to be general-purpose dogs in their native lands, used for hunting, hauling, herding, and guarding. Smaller breeds have faces that resemble fox faces, while larger breeds have faces that resemble wolf faces. Companions and toys Spitzes, with their thick fur, fluffy ruffs, curled tails and small muzzles and ears, have been bred into non-working dogs designed to be companions or lap dogs. This trend is most evident in the tiny Pomeranian, which was originally a much larger dog closer to the size of a Keeshond before being bred down to make an acceptable court animal. The Keeshond, the Wolfspitz variety of the German Spitz, is an affectionate, loyal, and very energetic pet that was bred as a watchdog for barges (hence the name Dutch Barge Dog). Often, these breeds are recognized for their "smiling" mouths. Other spitzes that have been bred away from working uses are the American Eskimo Dog, Alaskan Klee Kai, German Spitz, Volpino Italiano and Japanese Spitz.
Biology and health sciences
Dogs
null
740463
https://en.wikipedia.org/wiki/Skin%20grafting
Skin grafting
Skin grafting, a type of graft surgery, involves the transplantation of skin without a defined circulation. The transplanted tissue is called a skin graft. Surgeons may use skin grafting to treat: extensive wounding or trauma; burns; areas of extensive skin loss due to infection such as necrotizing fasciitis or purpura fulminans; and specific surgeries that may require skin grafts for healing to occur, most commonly removal of skin cancers. Skin grafting often takes place after serious injuries when some of the body's skin is damaged. Surgical removal (excision or debridement) of the damaged skin is followed by skin grafting. The grafting serves two purposes: reducing the course of treatment needed (and time in the hospital), and improving the function and appearance of the area of the body which receives the skin graft. There are two types of skin grafts: Partial-thickness: The more common type involves removing a thin layer of skin from a healthy part of the body (the donor section). Full-thickness: Involves excising a defined area of skin, with a depth of excision down to the fat. The full-thickness portion of skin is then placed at the recipient site. A full-thickness skin graft is more risky, in terms of the body accepting the skin, yet it leaves only a scar line on the donor section, similar to a Cesarean-section scar. In the case of full-thickness skin grafts, the donor section will often heal much more quickly than the injury and causes less pain than a partial-thickness skin graft. A partial-thickness donor site must heal by re-epithelialization, which can be painful and take an extensive length of time. Medical uses Two layers of skin created from animal sources have been found to be useful in treating venous leg ulcers. Classification Grafts can be classified by their thickness, the source, and the purpose. By source: Autologous: The donor skin is taken from a different site on the same individual's body (also known as an autograft). Isogeneic: The donor and recipient individuals are genetically identical (e.g., monozygotic twins, animals of a single inbred strain; isograft or syngraft). Allogeneic: The donor and recipient are of the same species (human→human, dog→dog; allograft). Xenogeneic: The donor and recipient are of different species (e.g., bovine cartilage; pig skin; xenograft or heterograft). Prosthetic: Lost tissue is replaced with synthetic materials such as metal, plastic, or ceramic (prosthetic implants). Allografts, xenografts, and prosthetic grafts are usually used as temporary skin substitutes, that is, as wound dressings for preventing infection and fluid loss. They will eventually need to be removed as the body starts to reject the foreign material. Autologous grafts and some forms of treated allografts can be left on permanently without rejection. Genetically modified pigs can produce allograft-equivalent skin material, and tilapia skin is used as an experimental cheap xenograft in places where porcine skin is unavailable and in veterinary medicine. By thickness: Split-thickness A split-thickness skin graft (STSG) includes the epidermis and part of the dermis. Its thickness depends on the donor site and the needs of the person receiving the graft. It can be processed through a skin mesher, which makes apertures in the graft, allowing it to expand up to nine times its size. Split-thickness grafts are frequently used as they can cover large areas and the rate of autorejection is low. The same site can be harvested again after six weeks.
The donor site heals by re-epithelialisation from the dermis and surrounding skin and requires dressings. Full-thickness A full-thickness skin graft consists of the epidermis and the entire thickness of the dermis. The donor site is either sutured closed directly or covered by a split-thickness skin graft. Composite graft A composite graft is a small graft containing skin and underlying cartilage or other tissue. Donor sites include, for example, ear skin and cartilage to reconstruct nasal alar rim defects. Donor selection When grafts are taken from other animals, they are known as heterografts or xenografts. By definition, they are temporary biologic dressings which the body will reject within days to a few weeks. They are useful in reducing the bacterial concentration of an open wound, as well as reducing fluid loss. For more extensive tissue loss, a full-thickness skin graft, which includes the entire thickness of the skin, may be necessary. This is often performed for defects of the face and hand where contraction of the graft should be minimized. The general rule is that the thicker the graft, the less the contraction and deformity. Cell cultured epithelial autograft (CEA) procedures take skin cells from the person needing the graft to grow new skin cells in sheets in a laboratory; because the cells are taken from the person, that person's immune system will not reject them. However, because these sheets are very thin (only a few cell layers thick) they do not stand up to trauma, and the "take" is often less than 100%. Research is investigating the possibilities of combining CEA and a dermal matrix in one product. Experimental procedures are being tested for burn victims using stem cells in solution which are applied to the burned area using a skin cell gun. Recent advances have been successful in applying the cells without damage. In order to remove the thin and well preserved skin slices and strips from the donor, surgeons use a special surgical instrument called a dermatome. This usually produces a split-thickness skin graft, which contains the epidermis with only a portion of the dermis. The dermis left behind at the donor site contains hair follicles and sebaceous glands, both of which contain epidermal cells which gradually proliferate out to form a new layer of epidermis. The donor site may be extremely painful and vulnerable to infection. There are several ways to treat donor site pain. These include subcutaneous anesthetic agents, topical anesthetic agents, and certain types of wound dressings. Healing process Stages of healing The graft is carefully spread on the bare area to be covered. It is held in place by a few small stitches or surgical staples. The healing process for skin grafts typically occurs in three stages: plasmatic imbibition, capillary inosculation, and neovascularization. During the first 24 hours, the graft is initially nourished by a process called plasmatic imbibition in which the graft "drinks plasma" (i.e., absorbs nutrients from the underlying recipient bed). Between 2–3 days, new blood vessels begin growing from the recipient area into the transplanted skin in a process called capillary inosculation. Between 4–7 days, neovascularization occurs in which new blood vessels form between the graft and the recipient tissues. 
Other To prevent the accumulation of fluid under the graft, which can prevent its attachment and revascularization, the graft is frequently meshed by making lengthwise rows of short, interrupted cuts, each a few millimeters long, with each row offset by half a cut length like bricks in a wall. In addition to allowing for drainage, this allows the graft to both stretch and cover a larger area as well as to more closely approximate the contours of the recipient area. However, it results in a rather pebbled appearance upon healing that may ultimately look less aesthetically pleasing. An increasingly common aid to both pre-operative wound maintenance and post-operative graft healing is the use of negative pressure wound therapy (NPWT). This system works by placing a section of foam cut to size over the wound, then laying a perforated tube onto the foam. The arrangement is then secured with bandages. A vacuum unit then creates negative pressure, sealing the edges of the wound to the foam, and drawing out excess blood and fluids. This process typically helps to maintain cleanliness in the graft site, promotes the development of new blood vessels, and increases the chances of the graft successfully taking. NPWT can also be used between debridement and graft operations to assist an infected wound in remaining clean for a period of time before new skin is applied. Skin grafting can also be regarded as a form of skin transplant. Z-plasty This is based upon the principle of mobilizing a full segment of skin from an area to the site needing tissue replacement. The flaps are triangularly shaped opposite each other. It can be used in direct excision and closure of a contracted scar to produce a better-looking scar that heals well. It helps to elongate and break up a linear scar. It also gives a good result in the release of linear contractures. Z-plasty is of paramount importance to the plastic surgeon and a frequently used method in both single and multiple forms. Risks Risks of skin graft surgery include bleeding, infection, loss of the grafted skin, nerve damage, graft-versus-host disease, and Marjolin's ulcer. Rejection may occur in xenografts. To prevent this, the person receiving the graft usually must be treated with long-term immunosuppressant drugs. Prognosis Most skin grafts are successful, but in some cases grafts do not heal well and may require repeat grafting. The graft should also be monitored for good circulation. Recovery time from skin grafting can be long. Graft recipients wear compression garments for several months and are at risk for depression and anxiety consequent to long-term pain and loss of function. History Skin grafting, in more rudimentary forms, has been practiced since ancient times. The Ebers Papyrus of ancient Egypt contains a brief treatise on xenografting. Around 500 years later, members of the Hindu Kamma caste are described as performing skin grafts that included the use of subcutaneous fat. The 2nd century AD Greek philosopher Celsus is also known to have developed a method to reconstruct the foreskins of Jewish men using skin grafts, as circumcision was considered barbaric in Greek and Roman society. More modern uses of skin grafting were described in the mid-to-late 19th century, including Reverdin's use of the pinch graft in 1869; Ollier's and Thiersch's uses of the split-thickness graft in 1872 and 1886, respectively; and Wolfe's and Krause's use of the full-thickness graft in 1875 and 1893, respectively. John Harvey Girdner demonstrated skin graft transplant from a deceased donor in 1880.
Today, skin grafting is commonly used in dermatologic surgery. Recently, Reverdin's technique has been used with very small grafts (less than 3 mm in diameter). Such small wounds heal in a short time without scars. This technique is called SkinDot. Alternatives to skin grafting There are alternatives to skin grafting, including skin substitutes using cells from patients, skin from other animals such as pigs (known as xenografts), and medical devices that help to close large wounds. Xenografting was originally known as "zoografting". Other animals that can be used include dogs, rabbits, frogs, and cats, with the greatest success achieved with porcine skin. Other skin substitutes include allografts, Biobrane, TransCyte, Integra, AlloDerm, and cultured epithelial autografts (CEA). There are medical devices that help close large wounds. The device uses skin anchors that are attached to healthy skin. An adjustable tension controller then exerts a constant pulling tension on sutures looped around the skin anchors. The device gradually closes the wound over time. Experimental Techniques "Microcolumn grafting" is a new grafting method being researched that uses needles to take autologous skin biopsies from the patient and implant them in the wound site.
Biology and health sciences
Surgery
Health
740480
https://en.wikipedia.org/wiki/Dermis
Dermis
The dermis or corium is a layer of skin between the epidermis (with which it makes up the cutis) and subcutaneous tissues; it primarily consists of dense irregular connective tissue and cushions the body from stress and strain. It is divided into two layers: the superficial area adjacent to the epidermis, called the papillary region, and a deeper, thicker area known as the reticular dermis. The dermis is tightly connected to the epidermis through a basement membrane. Structural components of the dermis are collagen, elastic fibers, and extrafibrillar matrix. It also contains mechanoreceptors that provide the sense of touch and thermoreceptors that provide the sense of heat. In addition, hair follicles, sweat glands, sebaceous glands (oil glands), apocrine glands, lymphatic vessels, nerves and blood vessels are present in the dermis. These blood vessels provide nourishment and waste removal for both dermal and epidermal cells. Structure The dermis is composed of three major types of cells: fibroblasts, macrophages, and mast cells. Apart from these cells, the dermis is also composed of matrix components such as collagen (which provides strength), elastin (which provides elasticity), and extrafibrillar matrix, an extracellular gel-like substance primarily composed of glycosaminoglycans (most notably hyaluronan), proteoglycans, and glycoproteins. Layers Papillary dermis The papillary dermis is the uppermost layer of the dermis. It intertwines with the rete ridges of the epidermis and is composed of fine and loosely arranged collagen fibers. The papillary region is composed of loose areolar connective tissue. It is named for its fingerlike projections called papillae (or, specifically, dermal papillae), which extend toward the epidermis and contain either terminal networks of blood capillaries or tactile Meissner's corpuscles. Dermal papillae The dermal papillae (DP) (singular papilla, diminutive of Latin papula, 'pimple') are small, nipple-like extensions (or interdigitations) of the dermis into the epidermis. At the surface of the skin in hands and feet, they appear as epidermal, papillary or friction ridges (colloquially known as fingerprints). Blood vessels in the dermal papillae nourish all hair follicles and bring nutrients and oxygen to the lower layers of epidermal cells. The patterns of ridges produced on the hands and feet are partly genetically determined features that develop before birth. They remain substantially unaltered (except in size) throughout life, and therefore determine the patterns of fingerprints, making them useful in certain functions of personal identification. The dermal papillae are part of the uppermost layer of the dermis, the papillary dermis, and the ridges they form greatly increase the surface area between the dermis and epidermis. Because the main function of the dermis is to support the epidermis, this greatly increases the exchange of oxygen, nutrients, and waste products between these two layers. Additionally, the increase in the surface area prevents the dermal and epidermal layers from separating by strengthening the junction between them. With age, the papillae tend to flatten and sometimes increase in number. The skin of the hands and fingers and the feet and toes is known by forensic scientists as friction ridge skin. It is known by anatomists as thick skin, volar skin or hairless skin. It has raised ridges, a thicker and more complex epidermis, increased sensory abilities, and the absence of hair and sebaceous glands.
The ridges increase friction for improved grasping. Dermal papillae also play a pivotal role in hair formation, growth and cycling. In mucous membranes, the equivalent structures to dermal papillae are generally termed "connective tissue papillae", which interdigitate with the rete pegs of the superficial epithelium. Dermal papillae are less pronounced in thin skin areas. Reticular dermis The reticular dermis is the lower layer of the dermis, found under the papillary dermis, composed of dense irregular connective tissue featuring densely-packed collagen fibers. It is the primary location of dermal elastic fibers. The reticular region is usually much thicker than the overlying papillary dermis. It receives its name from the dense concentration of collagenous, elastic, and reticular fibers that weave throughout it. These protein fibers give the dermis its properties of strength, extensibility, and elasticity. Within the reticular region are the roots of the hair, sebaceous glands, sweat glands, receptors, nails, and blood vessels. The orientation of collagen fibers within the reticular dermis creates lines of tension called Langer's lines, which are of some relevance in surgery and wound healing.
Biology and health sciences
Integumentary system
Biology
740540
https://en.wikipedia.org/wiki/Engineering%20physics
Engineering physics
Engineering physics (EP), sometimes engineering science, is the field of study combining pure science disciplines (such as physics, mathematics, chemistry or biology) and engineering disciplines (computer, nuclear, electrical, aerospace, medical, materials, mechanical, etc.). History The name and subject have been used since 1861 by the German physics teacher J. Frick in his publications. Definition and Terminology In many languages and countries, the term for "engineering physics" translates directly into English as "technical physics". In some countries, both what would be translated as "engineering physics" and what would be translated as "technical physics" are disciplines leading to academic degrees. In China, for example, the former specializes in nuclear power research (i.e., nuclear engineering), while the latter is closer to engineering physics. In some universities and their institutions, an engineering (or applied) physics major is a discipline or specialization within the scope of engineering science, or applied science. Related Names Several related names have existed since the inception of the interdisciplinary field. For example, some university courses are called or contain the phrase "physical technologies" or "physical engineering sciences" or "physical technics". In some cases, a program formerly called "physical engineering" has been renamed "applied physics" or has evolved into specialized fields such as "photonics engineering". Other Meanings A "physical design engineer", sometimes improperly called a "physical engineer", is an electrical engineer responsible for design and layout (routing) in CAE, specifically in ASIC/FPGA design. This role could be performed by a person trained in engineering physics if the person has received training in integrated electronics design, but this does not necessarily mean that an engineering physicist is an IC design engineer. Expertise Unlike traditional engineering disciplines, engineering science/physics is not necessarily confined to a particular branch of science, engineering or physics. Instead, engineering science/physics is meant to provide a more thorough grounding in applied physics for a selected specialty such as optics, quantum physics, materials science, applied mechanics, electronics, nanotechnology, microfabrication, microelectronics, computing, photonics, mechanical engineering, electrical engineering, nuclear engineering, biophysics, control theory, aerodynamics, energy, solid-state physics, etc. It is the discipline devoted to creating and optimizing engineering solutions through enhanced understanding and integrated application of mathematical, scientific, statistical, and engineering principles. The discipline is also meant for cross-functionality and bridges the gap between theoretical science and practical engineering, with an emphasis on research and development, design, and analysis. Degrees In many universities, engineering science programs may be offered at the levels of B.Tech., B.Sc., M.Sc. and Ph.D.
Usually, a core of basic and advanced courses in mathematics, physics, chemistry, and biology forms the foundation of the curriculum, while typical elective areas may include fluid dynamics, quantum physics, economics, plasma physics, relativity, solid mechanics, operations research, quantitative finance, information technology and engineering, dynamical systems, bioengineering, environmental engineering, computational engineering, engineering mathematics and statistics, solid-state devices, materials science, electromagnetism, nanoscience, nanotechnology, energy, and optics. Awards There are awards for excellence in engineering physics. For example, Princeton University's Jeffrey O. Kephart '80 Prize is awarded annually to the graduating senior with the best record. Since 2002, the German Physical Society has awarded the Georg-Simon-Ohm-Preis for outstanding research in this field.
Technology
Disciplines
null
740575
https://en.wikipedia.org/wiki/Throwing%20knife
Throwing knife
A throwing knife is a knife that is specially designed and weighted so that it can be thrown effectively. They are a distinct category from ordinary knives. Throwing knives are used by many cultures around the world, and as such different tactics for throwing them have been developed, as have different shapes and forms of throwing knife. Throwing knives are also used in sideshow acts and sport. Central Africa Throwing knives saw use in central Africa. Because they were used over such a wide area, they were referred to by a number of names, such as Onzil, Kulbeda, Mambele, Pinga, and Trombash. These weapons had multiple iron blades and were used for warfare and hunting. A maximum effective range of about has been suggested. The weapon appears to have originated in central Sudan somewhere around 1000 AD, from where it spread south. However, it has been suggested that the same weapon is depicted in Libyan wall sculptures dating to around 1350 BC. Throwing knives were widely collected by Europeans, with the result that many European and American museums hold extensive collections. However, the collectors generally failed to record the origin of the blades or their use. As a result, the history and use of the throwing knives is poorly understood. A further complication is that the label "throwing knife" was attached by ethnographers to various objects that did not fit into other weapon categories, even though they may not have been thrown. Western tradition Throwing knives are commonly made of a single piece of steel or other material, without handles, unlike other types of knives. The knife has two sections: the "blade", which is the sharpened half of the knife, and the "grip", which is not sharpened. The purpose of the grip is to allow the knife to be safely handled by the user and also to balance the weight of the blade. Throwing knives are of two kinds, balanced and unbalanced. A balanced knife is made in such a way that the center of gravity and the geometrical center of the knife (the centroid) coincide. The trajectory of a thrown knife is the path of the center of gravity through the air. When a balanced knife is thrown, the circles described by the point and the end of the hilt as the knife rotates about the center of gravity will have the same diameter, making the trajectory more predictable. For an unbalanced knife, the circles described will have different diameters, meaning that the point and the end of the hilt will hit a target in different locations at any point along the trajectory. This makes predicting the trajectory more difficult (the geometry is illustrated in the sketch at the end of this article). Balanced knives can be thrown by gripping either the point or the hilt, depending upon the user's preference and the distance to the target. Unbalanced knives are generally thrown by gripping the lighter end. There are also knives with adjustable weights which can slide along the length of the blade. This way, a knife can function as either a balanced or an unbalanced knife, depending upon the position of the weight. Balanced knives are generally preferred over unbalanced ones for two reasons: a) balanced knives can be thrown from the handle as well as from the blade, and b) it is easier to change from one balanced knife to another. The weight of the throwing knife and the throwing speed determine the power of the impact. Lighter knives can be thrown with relative ease, but they may fail to penetrate the target properly, resulting in "bounce back".
Heavy throwing knives are more stable in their flight and cause more damage to the target, but more strength is needed to throw them accurately. Hans Talhoffer (c. 1410–1415 – after 1482) and Paulus Hector Mair (1517–1579) both mention throwing daggers in their treatises on combat and weapons. Talhoffer specifies a type of spiked dagger for throwing, while Mair describes throwing the dagger at the opponent's chest.
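The balance geometry described above lends itself to a short sketch. The following Python snippet is purely illustrative; the knife length and centre-of-gravity positions are hypothetical values, not measurements from any source.

```python
# For a knife of length L whose centre of gravity lies a distance d from the
# point, the point and the hilt end trace circles of radius d and L - d about
# the centre of gravity as the knife rotates in flight. A balanced knife
# (d = L / 2) traces circles of equal diameter, which is what makes its
# trajectory easier to predict.

def rotation_radii(length_cm: float, cog_from_point_cm: float) -> tuple[float, float]:
    """Radii of the circles traced by the point and by the end of the hilt."""
    return cog_from_point_cm, length_cm - cog_from_point_cm

print(rotation_radii(26.0, 13.0))  # balanced:   (13.0, 13.0)
print(rotation_radii(26.0, 9.0))   # unbalanced: (9.0, 17.0)
```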
Technology
Knives
null
740818
https://en.wikipedia.org/wiki/Siding%20%28construction%29
Siding (construction)
Siding or wall cladding is the protective material attached to the exterior side of a wall of a house or other building. Along with the roof, it forms the first line of defense against the elements, most importantly sun, rain/snow, heat and cold, thus creating a stable, more comfortable environment on the interior side. The siding material and style also can enhance or detract from the building's beauty. There is a wide and expanding variety of materials to side with, both natural and artificial, each with its own benefits and drawbacks. Masonry walls as such do not require siding, but any wall can be sided. Walls that are internally framed, whether with wood or steel I-beams, however, must always be sided. Most siding consists of pieces of weather-resistant material that are smaller than the wall they cover, to allow for expansion and contraction of the materials due to moisture and temperature changes. There are various styles of joining the pieces, from board and batten, where the butt joints between panels are covered with a thin strip (usually 25 to 50 mm wide) of wood, to a variety of clapboard, also called lap siding, in which planks are laid horizontally across the wall starting from the bottom and building up, with each board overlapped by the board above it. These techniques of joinery are designed to prevent water from entering the walls. Siding that does not consist of joined pieces includes stucco, which is widely used in the Southwestern United States. It is a plaster-like siding and is applied over a lattice, just like plaster. However, because of the lack of joints, it eventually cracks and is susceptible to water damage. Rainscreen construction is used to improve siding's ability to keep walls dry. Wood siding Wood siding is very versatile in style and can be used on a wide variety of building structures. It can be painted or stained in any color palette desired. Though installation and repair are relatively simple, wood siding requires more maintenance than other popular solutions, requiring treatment every four to nine years depending on the severity of the elements to which it is exposed. Ants and termites are a threat to many types of wood siding, such that the extra treatment and maintenance can significantly increase the cost in some pest-infested areas. Wood is a moderately renewable resource and is biodegradable. However, most paints and stains used to treat wood are not environmentally friendly and can be toxic. Wood siding can provide some minor insulation and structural properties as compared to thinner cladding materials. Shingles Wood shingle or irregular cedar "shake" siding was used in early New England construction, and was revived in Shingle Style and Queen Anne style architecture in the late 19th century. Clapboards Wood siding in overlapping horizontal rows or "courses" is called clapboard, weatherboard (British English), or bevel siding, which is made with beveled boards, thin at the top edge and thick at the butt. In colonial North America, Eastern white pine was the most common material. Wood siding can also be made of naturally rot-resistant woods such as redwood or cedar. Drop siding Jointed horizontal siding (also called "drop" siding or novelty siding) may be shiplapped or tongue and grooved (though this is less common). Drop siding comes in a wide variety of face finishes, including Dutch lap (also called German or cove lap) and log siding (milled with a curve).
Vertical boards Vertical siding may have a cover over the joint: board and batten, popular in American wooden Carpenter Gothic houses; or, less commonly, behind the joint, called batten and board or reversed board and batten. Wooden sheet siding Plywood sheet siding is sometimes used on inexpensive buildings, sometimes with grooves to imitate vertical shiplap siding. One example of such grooved plywood siding is the type called Texture 1–11, T1-11, or T111 ("tee-one-eleven"). There is also a product known as reverse board-and-batten (RBB) that looks similar but has deeper grooves. Some of these products may be thick enough and rated for structural applications if properly fastened to studs. Both T1-11 and RBB sheets are quick and easy to install as long as they are installed with compatible flashing at butt joints. Stone siding Slate shingles may be simple in form, but many buildings with slate siding are highly decorative. Plastic siding Wood clapboard is often imitated using vinyl siding or uPVC weatherboarding. It is usually produced in units twice as high as clapboard. Plastic imitations of wood shingles and wood shakes also exist. Since plastic siding is a manufactured product, it may come in unlimited color choices and styles. Historically, vinyl siding would fade, crack and buckle over time, requiring the siding to be replaced. However, newer vinyl options have improved and resist damage and wear better. Vinyl siding is sensitive to direct heat from grills, barbecues or other sources. Unlike wood, vinyl siding does not provide additional insulation for the building, unless an insulation material (e.g., foam) has been added to the product. It has also been criticized by some fire safety experts for its heat sensitivity. This sensitivity makes it easier for a house fire to jump to neighboring houses in comparison to materials such as brick, metal or masonry. Vinyl siding has a potential environmental cost. While vinyl siding can be recycled, it cannot be burned (due to toxic dioxin gases that would be released). If dumped in a landfill, plastic siding does not break down quickly. Vinyl siding is also considered by many to be one of the less attractive siding choices. Although newer options and proper installation can eliminate this complaint, vinyl siding often has visible seam lines between panels and generally does not have the quality appearance of wood, brick, or masonry. The fading and cracking of older types of plastic siding compound this issue. In many areas of newer housing development, particularly in North America, entire neighbourhoods are often built with all houses clad in vinyl siding, giving an unappealing uniformity. Some cities now campaign for house developers to incorporate varied types of siding during construction. Imitation brick or stone–asphalt siding A predecessor to modern maintenance-free sidings was asphalt brick siding. Asphalt-impregnated panels (about ) give the appearance of brick or even stone. Many buildings have this siding, especially old sheds and garages. If the panels are straight and level and not damaged, the only indication that they are not real brick may be seen at the corner caps. Trademarked names included Insulbrick, Insulstone, and Insulwood. Commonly used names now are faux brick, lick-it-and-stick-it brick, and ghetto brick. Often such siding is now covered with newer metal or plastic siding. Today thin panels of real brick are manufactured for veneer or siding. Insulated siding Insulated siding has emerged as a new siding category in recent years.
Considered an improvement over vinyl siding, insulated siding is custom fit with expanded polystyrene foam (EPS) that is fused to the back of the siding, which fills the gap between the home and the siding. The products provide environmental advantages by reducing energy use by up to 20 percent. On average, insulated siding products have an R-value of 3.96, triple that of other exterior cladding materials (see the illustrative heat-loss calculation at the end of this article). Insulated siding products are typically Energy Star qualified, engineered in compliance with environmental standards set by the U.S. Department of Energy and the United States Environmental Protection Agency. In addition to reducing energy consumption, insulated siding is a durable exterior product, designed to last more than 50 years, according to manufacturers. The foam provides rigidity for a more ding- and wind-resistant siding, maintaining a quality look for the life of the product. The foam backing also creates straighter lines when hung, providing a look more like that of wood siding, while remaining low maintenance. Manufacturers report that insulated siding is permeable or "breathable", allowing water vapor to escape, which can protect against rot, mold and mildew, and help maintain healthy indoor air quality. Metal siding Metal siding comes in a variety of metals, styles, and colors. It is most often associated with modern, industrial, and retro buildings. Utilitarian buildings often use corrugated galvanized steel sheet siding or cladding, which often has a coloured vinyl finish. Corrugated aluminium cladding is also common where a more durable finish is required; it is also lightweight and easy to shape and install, making it a popular metal siding choice. Formerly, imitation wood clapboard was made of aluminium (aluminium siding). That role is typically played by vinyl siding today. Aluminium siding is well suited to homes in coastal areas with much moisture and salt, since aluminium reacts with air to form aluminium oxide, an extremely hard coating that seals the aluminium surface from further degradation. In contrast, steel forms rust, which can weaken the structure of the material, and corrosion-resistant coatings for steel, such as zinc, sometimes fail around the edges as years pass. However, an advantage of steel siding can be its dent resistance, which is excellent for regions with severe storms, especially if the area is prone to hail. The first architectural application of aluminium was the mounting of a small grounding cap on the Washington Monument in 1884. Sheet-iron or steel clapboard siding units had been patented in 1903, and Sears, Roebuck & Company had been offering embossed steel siding in stone and brick patterns in their catalogues for several years by the 1930s. Alcoa began promoting the use of aluminium in architecture in the 1920s, when it produced ornamental spandrel panels for the Cathedral of Learning and for the Chrysler and Empire State Buildings in New York. The exterior of the A.O. Smith Corporation Building in Milwaukee was clad entirely in aluminium by 1930, and siding panels of Duralumin sheet from Alcoa sheathed an experimental exhibit house for the Architectural League of New York in 1931. Most architectural applications of aluminium in the 1930s were on a monumental scale, and it was another six years before it was put to use in residential construction. In the first few years after World War II, manufacturers began developing and widely distributing aluminium siding.
Among them, Indiana businessman Frank Hoess was credited with the invention of the configuration seen on modern aluminium siding. His experiments began in 1937 with steel siding in imitation of wooden clapboards. Other types of sheet metal and steel siding on the market at the time presented problems with warping, creating openings through which water could enter, introducing rust. Hoess remedied this problem through the use of a locking joint, formed by a small flap at the top of each panel that joined with a U-shaped flange on the lower edge of the previous panel, thus forming a watertight horizontal seam. After he had received a patent for his siding in 1939, Hoess produced a small housing development of about forty-four houses covered in his clapboard-style steel siding for blue-collar workers in Chicago. His operations were curtailed when war plants commandeered the industry. In 1946 Hoess allied with Metal Building Products of Detroit, a corporation that promoted and sold Hoess siding of Alcoa aluminium. Their product was used on large housing projects in the northeast and was purportedly the siding of choice for a 1947 Pennsylvania development, the first subdivision to use aluminium siding exclusively. Products such as unpainted aluminium panels, starter strips, corner pieces and specialized application clips were assembled in the Indiana shop of the Hoess brothers. Siding could be applied over conventional wooden clapboards, or it could be nailed to studs via special clips affixed to the top of each panel. Insulation was placed between studs. While the Hoess Brothers company continued to function for about twelve more years after the dissolution of the Metal Building Products Corporation in 1948, they were less successful than rising siding companies like Reynolds Metals. Thatch siding Thatch is an ancient and very widespread building material used on roofs and walls. Thatch siding is made with dry vegetation such as longstraw, water reeds, or combed wheat reed. The materials are overlapped and woven in patterns designed to deflect and direct water. Masonry siding Stone and masonry veneer, sometimes considered siding, are varied and can accommodate a variety of styles, from formal to rustic. Though masonry can be painted or tinted to match many color palettes, it is most suited to neutral earth tones, and coatings such as roughcast and pebbledash. Masonry has excellent durability (over 100 years), and minimal maintenance is required. The primary drawback to masonry siding is the initial cost. Precipitation can threaten the structure of buildings, so it is important that the siding be able to withstand the weather conditions in the local region. For rainy regions, exterior insulation finishing systems (EIFS) have been known to suffer underlying wood rot problems with excessive moisture exposure. The environmental impact of masonry depends on the type of material used. In general, concrete and concrete-based materials are energy-intensive to produce. However, the long durability and minimal maintenance of masonry sidings mean that less energy is required over the life of the siding. Composite siding Various composite materials are also used for siding: asphalt shingles, asbestos, fiber cement, aluminium (ACM), fiberboard, hardboard, etc. They may be in the form of shingles or boards, in which case they are sometimes called clapboard. Composite sidings are available in many styles and can mimic the other siding options.
Composite materials are ideal for achieving the style or "look" of a material that may not itself be suited to the local environment (e.g., corrugated aluminium siding in an area prone to severe storms, steel in coastal climates, or wood siding in termite-infested regions). Costs of composites tend to be lower than wood options, but vary widely, as do installation, maintenance and repair requirements. Not surprisingly, the durability and environmental impact of composite sidings depend on the specific materials used in the manufacturing process. Fiber cement siding is a class of composite siding that is usually made from a combination of cement, cellulose (wood), sand, and water. The boards are either coated or painted in the factory, or installed and then painted afterwards. Fiber cement is popular for its realistic look, durability, low-maintenance properties, fire resistance, and its light weight compared to traditional wood siding. Composite siding products containing cellulose (wood fibers) have been shown to have problems with deterioration, delamination, or loss of coating adhesion in certain climates or under certain environmental conditions. A younger class of non-wood synthetic siding has emerged in the past 15 years. These products are usually made from a combination of non-wood materials such as polymeric resins, fiberglass, stone, sand, and fly ash, and are chosen for their durability, curb appeal, and ease of maintenance. Given the newness of such technologies, product lifespan can only be estimated, varieties are limited, and distribution is sporadic.
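As a rough illustration of the insulation figures quoted in the insulated-siding section above, the sketch below applies the standard steady-state conduction relation Q = A·ΔT/R; the wall area, temperature difference, and base wall R-value are assumptions chosen only for the example.

```python
# Illustrative sketch of how added siding R-value reduces conductive heat loss,
# using the steady-state relation Q = A * dT / R (imperial units: Btu/h, ft^2,
# degrees F, with R in ft^2*F*h/Btu). The base wall R-value is an assumption,
# not a figure from the source; only the 3.96 siding R-value is quoted above.

def heat_loss_btu_per_hr(area_ft2: float, delta_t_f: float, r_value: float) -> float:
    """Steady-state conductive heat loss through a wall assembly."""
    return area_ft2 * delta_t_f / r_value

WALL_AREA = 1000.0          # ft^2 (assumed)
DELTA_T = 40.0              # degrees F, indoor minus outdoor (assumed)
R_BASE_WALL = 11.0          # assumed R-value of the wall without siding
R_INSULATED_SIDING = 3.96   # average R-value quoted for insulated siding

plain = heat_loss_btu_per_hr(WALL_AREA, DELTA_T, R_BASE_WALL)
insulated = heat_loss_btu_per_hr(WALL_AREA, DELTA_T, R_BASE_WALL + R_INSULATED_SIDING)

print(f"without insulated siding: {plain:.0f} Btu/h")
print(f"with insulated siding:    {insulated:.0f} Btu/h")
print(f"reduction: {100 * (plain - insulated) / plain:.1f}%")  # ~26%
```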
Technology
Building materials
null
740885
https://en.wikipedia.org/wiki/Calcium%20fluoride
Calcium fluoride
Calcium fluoride is the inorganic compound of the elements calcium and fluorine with the formula CaF2. It is a white solid that is practically insoluble in water. It occurs as the mineral fluorite (also called fluorspar), which is often deeply coloured owing to impurities. Chemical structure The compound crystallizes in a cubic motif called the fluorite structure. Ca2+ centres are eight-coordinate, being centred in a cube of eight F− centres. Each F− centre is coordinated to four Ca2+ centres in the shape of a tetrahedron. Although perfectly packed crystalline samples are colorless, the mineral is often deeply colored due to the presence of F-centers. The same crystal structure is found in numerous ionic compounds with formula AB2, such as CeO2, cubic ZrO2, UO2, ThO2, and PuO2. In the corresponding anti-structure, called the antifluorite structure, anions and cations are swapped, as in Be2C. Gas phase The gas phase is noteworthy for failing the predictions of VSEPR theory; the molecule is not linear like MgF2, but bent, with a bond angle of approximately 145°; the strontium and barium dihalides also have a bent geometry. It has been proposed that this is due to the fluoride ligands interacting with the electron core or the d-subshell of the calcium atom. Preparation The mineral fluorite is abundant, widespread, and mainly of interest as a precursor to HF. Thus, little motivation exists for the industrial production of CaF2. High-purity CaF2 is produced by treating calcium carbonate with hydrofluoric acid: CaCO3 + 2 HF → CaF2 + CO2 + H2O Applications Naturally occurring CaF2 is the principal source of hydrogen fluoride, a commodity chemical used to produce a wide range of materials. Calcium fluoride in the fluorite state is of significant commercial importance as a fluoride source. Hydrogen fluoride is liberated from the mineral by the action of concentrated sulfuric acid: CaF2 + H2SO4 → CaSO4(solid) + 2 HF Others Calcium fluoride is used to manufacture optical components such as windows and lenses, used in thermal imaging systems, spectroscopy, telescopes, and excimer lasers (used for photolithography in the form of a fused lens). It is transparent over a broad range from ultraviolet (UV) to infrared (IR) frequencies. Its low refractive index reduces the need for anti-reflection coatings. Its insolubility in water is convenient as well. It also transmits much shorter wavelengths than ordinary optical glass. Doped calcium fluoride, like natural fluorite, exhibits thermoluminescence and is used in thermoluminescent dosimeters. Safety CaF2 is classified as "not dangerous", although reacting it with sulfuric acid produces hydrofluoric acid, which is highly corrosive and toxic. With regard to inhalation, the NIOSH-recommended concentration of fluorine-containing dusts is 2.5 mg/m3 in air.
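The acid-digestion reaction above fixes the theoretical yield of hydrogen fluoride. The sketch below works through the stoichiometry for an arbitrary one-tonne input of pure fluorite, assuming complete conversion (an idealization).

```python
# Illustrative stoichiometry for CaF2 + H2SO4 -> CaSO4 + 2 HF.
# Assumes pure fluorite and complete conversion (an idealization).

M_CA, M_F, M_H = 40.078, 18.998, 1.008   # atomic masses, g/mol
M_CAF2 = M_CA + 2 * M_F                  # ~78.07 g/mol
M_HF = M_H + M_F                         # ~20.01 g/mol

def hf_mass_from_fluorite(caf2_mass_g: float) -> float:
    """Mass of HF (g) liberated from a given mass of CaF2: 2 mol HF per mol CaF2."""
    moles_caf2 = caf2_mass_g / M_CAF2
    return 2 * moles_caf2 * M_HF

print(f"{hf_mass_from_fluorite(1_000_000) / 1000:.0f} kg HF per tonne of CaF2")
# -> roughly 513 kg of HF per tonne of fluorite, at theoretical yield
```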
Physical sciences
Halide salts
Chemistry
740929
https://en.wikipedia.org/wiki/Sylvite
Sylvite
Sylvite, or sylvine, is potassium chloride (KCl) in natural mineral form. It forms crystals in the isometric system very similar to normal rock salt, halite (NaCl). The two are, in fact, isomorphous. Sylvite is colorless to white with shades of yellow and red due to inclusions. It has a Mohs hardness of 2.5 and a specific gravity of 1.99. It has a refractive index of 1.4903. Sylvite has a salty taste with a distinct bitterness. Sylvite is one of the last evaporite minerals to precipitate out of solution. As such, it is found only in very dry saline areas. Its principal use is as a potassium fertilizer. Sylvite is found in many evaporite deposits worldwide. Massive bedded deposits occur in New Mexico and western Texas, and in Utah in the US, but the largest world source is in Saskatchewan, Canada. The vast deposits in Saskatchewan were formed by the evaporation of a Devonian seaway. Sylvite is the official mineral of Saskatchewan. Sylvite was first described in 1832 at Mount Vesuvius near Naples in Italy and named after the historical KCl designations sal digestivum Sylvii and sal febrifugum Sylvii, which were named after the Dutch physician and chemist François Sylvius de le Boe (1614–1672). Sylvite, along with quartz, fluorite and halite, is used for spectroscopic prisms and lenses.
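A minor illustration of the refractive index quoted above: the normal-incidence Fresnel formula (standard optics, not stated in this entry) gives the fraction of light a single polished sylvite surface reflects in air.

```python
# Normal-incidence Fresnel reflectance R = ((n - 1) / (n + 1))^2 for a single
# polished surface in air, applied to sylvite's refractive index quoted above.
# The formula itself is standard optics, not something given in this entry.

def normal_incidence_reflectance(n: float) -> float:
    return ((n - 1) / (n + 1)) ** 2

n_sylvite = 1.4903
r = normal_incidence_reflectance(n_sylvite)
print(f"~{100 * r:.1f}% of light reflected per surface")  # ~3.9%
```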
Physical sciences
Minerals
Earth science
741426
https://en.wikipedia.org/wiki/Afternoon
Afternoon
Afternoon is the time between noon and sunset or evening. It is the time when the sun is descending from its peak in the sky to somewhat before its terminus at the horizon in the west. In human life, it occupies roughly the latter half of the standard work and school day. In literal terms, it refers to a time specifically after noon. Terminology Afternoon is often defined as the period between noon and sunset. If this definition is adopted, the specific range of time varies in one direction: noon is defined as the time when the sun reaches its highest point in the sky, but the boundary between afternoon and evening has no standard definition. However, before a period of transition from the 12th to 14th centuries, noon instead referred to 3:00 pm. Possible explanations include shifting times for prayers and midday meals, around which one concept of noon was defined, and so afternoon would have referred to a narrower timeframe. The word afternoon, which derives from after and noon, has been attested from about the year 1300; Middle English contained both afternoon and the synonym aftermete. The standard phrasing was at afternoon in the 15th and 16th centuries, but it has shifted to in the afternoon since then. In American English dialects, the word evening is sometimes used to encompass all times between noon and midnight. The Irish language contains four different words to mark time intervals from late afternoon to nightfall; this period is considered mystical. Metaphorically, the word afternoon refers to a relatively late period in the expanse of time or in one's life. The equivalent of Earth's afternoon on another planet would refer to the time the principal star of that planetary system would be in descent from its prime meridian, as seen from the planet's surface. Events Afternoon is a time when the sun is descending from its daytime peak. During the afternoon, the sun moves from roughly the center of the sky to deep in the west. In late afternoon, sunlight is particularly bright and glaring, because the sun is at a low angle in the sky. The standard working time in most industrialized countries runs from the morning to the late afternoon or early evening (archetypally, 9:00 am to 5:00 pm), so the latter part of this time takes place in the afternoon. Schools usually let their students out around 3:00 pm, in the mid-afternoon. In Denmark, afternoon is considered to be between 1:00 and 5:00 pm. Effects on life Hormones In diurnal animals, it is typical for blood levels of the hormone cortisol (which is used to increase blood sugar, aid metabolism, and is also produced in response to stress) to be most stable in the afternoon, after decreasing throughout the morning. However, cortisol levels are also the most reactive to environmental changes unrelated to sleep and daylight during the afternoon. As a result, this time of day is considered optimal for researchers studying stress and hormone levels. Plants generally have their highest photosynthetic levels of the day at noon and in the early afternoon, owing to the sun's high angle in the sky. The large proliferation of maize crops across Earth has caused tiny, harmless fluctuations in the normal pattern of atmospheric carbon dioxide levels, since these crops take up large amounts of carbon dioxide for photosynthesis during these times, and this uptake drops sharply in the late afternoon and evening. Body temperature In humans, body temperature is typically highest during the mid to late afternoon.
However, human athletes being tested for physical vigor on exercise machines showed no statistically significant difference after lunch. Owners of factory farms are advised to use buildings with an east–west (as opposed to north–south) orientation to house their livestock, because an east–west orientation generally means thicker walls on the east and west to accommodate the sun's acute angle and intense glare during late afternoon. When these animals are too hot, they are more likely to become belligerent and unproductive. Alertness The afternoon, especially the early afternoon, is associated with a dip in a variety of areas of human cognitive and productive functioning. Notably, motor vehicle accidents occur more frequently in the early afternoon, when drivers have presumably recently finished lunch. A study of motor accidents in Sweden between 1987 and 1991 found that the time around 5:00 pm had by far the most accidents: around 1,600 at 5:00 pm, compared to around 1,000 each at 4:00 pm and 6:00 pm. This trend may have been influenced by the afternoon rush hour, but the morning rush hour showed a much smaller increase. In Finland, accidents in the agriculture industry are most common in the afternoon, specifically on Monday afternoons in September. One psychology professor studying circadian rhythms found that his students performed somewhat worse on exams in the afternoon than in the morning, but even worse in the evening. Neither of these differences, however, was statistically significant. Four studies carried out in 1997 found that subjects who were given tests on differentiating traffic signs had longer reaction times when tested at 3:00 pm and 6:00 pm than at 9:00 am and 12:00 pm. These trends held across all four studies and for both complex and abstract questions. However, one UK-based researcher failed to find any difference in exam performance on over 300,000 A-level exam papers sat in either the morning or afternoon. Human productivity routinely decreases in the afternoon. Power plants have shown significant reductions in productivity in the afternoon compared to the morning, the largest differences occurring on Saturdays and the smallest on Mondays. One 1950s study covering two female factory workers for six months found that their productivity was 13 percent lower in the afternoon, the least productive time being their last hour at work. The differences were attributed to personal breaks and unproductive activities at the workplace. Another, larger study found that afternoon declines in productivity were greater during longer work shifts. Not all humans share identical circadian rhythms. One study across Italy and Spain had students fill out a questionnaire, then ranked them on a "morningness–eveningness" scale. The results formed a fairly standard bell curve. Levels of alertness over the course of the day had a significant correlation with scores on the questionnaire. All categories of participants (evening types, morning types, and intermediate types) had high levels of alertness from roughly 2:00 pm to 8:00 pm, but outside this window their alertness levels corresponded to their scores.
Physical sciences
Celestial mechanics
Astronomy
741823
https://en.wikipedia.org/wiki/Optical%20filter
Optical filter
An optical filter is a device that selectively transmits light of different wavelengths, usually implemented as a plane glass or plastic device in the optical path, which is either dyed in the bulk or has interference coatings. The optical properties of filters are completely described by their frequency response, which specifies how the magnitude and phase of each frequency component of an incoming signal is modified by the filter. Filters mostly belong to one of two categories. The simplest, physically, is the absorptive filter; then there are interference or dichroic filters. Many optical filters are used for optical imaging and are manufactured to be transparent; some used for light sources can be translucent. Optical filters selectively transmit light in a particular range of wavelengths, that is, colours, while absorbing the remainder. They can usually pass long wavelengths only (longpass), short wavelengths only (shortpass), or a band of wavelengths, blocking both longer and shorter wavelengths (bandpass). The passband may be narrower or wider; the transition or cutoff between maximal and minimal transmission can be sharp or gradual. There are filters with more complex transmission characteristics, for example with two peaks rather than a single band; these are usually older designs traditionally used for photography, while filters with more regular characteristics are used for scientific and technical work. Optical filters are commonly used in photography (where some special effect filters are occasionally used as well as absorptive filters), in many optical instruments, and to colour stage lighting. In astronomy, optical filters are used to restrict light passed to the spectral band of interest, e.g., to study infrared radiation without visible light, which would affect film or sensors and overwhelm the desired infrared. Optical filters are also essential in fluorescence applications such as fluorescence microscopy and fluorescence spectroscopy. Photographic filters are a particular case of optical filters, and much of the material here applies. Photographic filters do not need the accurately controlled optical properties and precisely defined transmission curves of filters designed for scientific work, and sell in larger quantities at correspondingly lower prices than many laboratory filters. Some photographic effect filters, such as star effect filters, are not relevant to scientific work. Measurement In general, a given optical filter transmits a certain percentage of the incoming light as the wavelength changes. This is measured by a spectrophotometer. As a linear material, the absorption for each wavelength is independent of the presence of other wavelengths. A very few materials are non-linear, and the transmittance depends on the intensity and the combination of wavelengths of the incident light. Transparent fluorescent materials can work as an optical filter, with an absorption spectrum, and also as a light source, with an emission spectrum. Also in general, light which is not transmitted is absorbed; for intense light, that can cause significant heating of the filter. However, the optical term absorbance refers to the attenuation of the incident light, regardless of the mechanism by which it is attenuated. Some filters, like mirrors, interference filters, or metal meshes, reflect or scatter much of the non-transmitted light.
The (dimensionless) optical density of a filter at a particular wavelength of light is defined as OD = −log10(T), where T is the (dimensionless) transmittance of the filter at that wavelength (a numerical sketch of this relation appears at the end of this article). Absorptive Optical filtering was first done with liquid-filled, glass-walled cells; these are still used for special purposes. The widest range of color selection is now available as colored-film filters, originally made from animal gelatin but now usually a thermoplastic such as acetate, acrylic, polycarbonate, or polyester, depending upon the application. They were standardized for photographic use by Wratten in the early 20th century, and also by color gel manufacturers for theater use. There are now many absorptive filters made from glass to which various inorganic or organic compounds have been added. Colored glass optical filters, although harder to make to precise transmittance specifications, are more durable and stable once manufactured. Dichroic filter Alternately, dichroic filters (also called "reflective", "thin film", or "interference" filters) can be made by coating a glass substrate with a series of optical coatings. Dichroic filters usually reflect the unwanted portion of the light and transmit the remainder. Dichroic filters use the principle of interference. Their layers form a sequential series of reflective cavities that resonate with the desired wavelengths. Other wavelengths destructively cancel or reflect as the peaks and troughs of the waves overlap. Dichroic filters are particularly suited for precise scientific work, since their exact colour range can be controlled by the thickness and sequence of the coatings. They are usually much more expensive and delicate than absorption filters. They can be used in devices such as the dichroic prism of a camera to separate a beam of light into different coloured components. The basic scientific instrument of this type is a Fabry–Pérot interferometer. It uses two mirrors to establish a resonating cavity. It passes wavelengths that are a multiple of the cavity's resonance frequency. Etalons are another variation: transparent cubes or fibers whose polished ends form mirrors tuned to resonate with specific wavelengths. These are often used to separate channels in telecommunications networks that use wavelength division multiplexing on long-haul optic fibers. Monochromatic Monochromatic filters only allow a narrow range of wavelengths (essentially a single color) to pass. Infrared The term "infrared filter" can be ambiguous, as it may be applied to filters that pass infrared (blocking other wavelengths) or that block infrared (passing other wavelengths). Infrared-passing filters are used to block visible light but pass infrared; they are used, for example, in infrared photography. Infrared cut-off filters are designed to block or reflect infrared wavelengths but pass visible light. Mid-infrared filters are often used as heat-absorbing filters in devices with bright incandescent light bulbs (such as slide and overhead projectors) to prevent unwanted heating due to infrared radiation. There are also filters which are used in solid state video cameras to block IR due to the high sensitivity of many camera sensors to unwanted near-infrared light. Ultraviolet Ultraviolet (UV) filters block ultraviolet radiation, but let visible light through.
Because photographic film and digital sensors are sensitive to ultraviolet (which is abundant in skylight) but the human eye is not, such light would, if not filtered out, make photographs look different from the scene visible to people, for example making images of distant mountains appear unnaturally hazy. An ultraviolet-blocking filter renders images closer to the visual appearance of the scene. As with infrared filters, there is a potential ambiguity between UV-blocking and UV-passing filters; the latter are much less common, and more usually known explicitly as UV pass filters and UV bandpass filters. Neutral density Neutral density (ND) filters have a constant attenuation across the range of visible wavelengths, and are used to reduce the intensity of light by reflecting or absorbing a portion of it. They are specified by the optical density (OD) of the filter, which is the negative of the common logarithm of the transmission coefficient. They are useful for making photographic exposures longer. A practical example is making a waterfall look blurry when it is photographed in bright light. Alternatively, the photographer might want to use a larger aperture (so as to limit the depth of field); adding an ND filter permits this. ND filters can be reflective (in which case they look like partially reflective mirrors) or absorptive (appearing grey or black). Longpass A longpass (LP) filter is an optical interference or coloured glass filter that attenuates shorter wavelengths and transmits (passes) longer wavelengths over the active range of the target spectrum (ultraviolet, visible, or infrared). Longpass filters, which can have a very sharp slope (referred to as edge filters), are described by the cut-on wavelength at 50 percent of peak transmission. In fluorescence microscopy, longpass filters are frequently utilized in dichroic mirrors and barrier (emission) filters. Use of the older term "low pass" to describe longpass filters has become uncommon; filters are usually described in terms of wavelength rather than frequency, and a "low pass filter", without qualification, would be understood to be an electronic filter. Band-pass Band-pass filters only transmit a certain wavelength band, and block others. The width of such a filter is expressed in the wavelength range it lets through, which can be anything from much less than an ångström to a few hundred nanometers. Such a filter can be made by combining an LP and an SP filter. Examples of band-pass filters are the Lyot filter and the Fabry–Pérot interferometer. Both of these filters can also be made tunable, such that the central wavelength can be chosen by the user. Band-pass filters are often used in astronomy when one wants to observe a certain process with specific associated spectral lines. The Dutch Open Telescope and Swedish Solar Telescope are examples where Lyot and Fabry–Pérot filters are being used. Shortpass A shortpass (SP) filter is an optical interference or coloured glass filter that attenuates longer wavelengths and transmits (passes) shorter wavelengths over the active range of the target spectrum (usually the ultraviolet and visible region). In fluorescence microscopy, shortpass filters are frequently employed in dichromatic mirrors and excitation filters. Guided-mode resonance filters Guided-mode resonance filters are a relatively new class of filters, introduced around 1990. They are normally filters in reflection, that is, they are notch filters in transmission.
In their most basic form, they consist of a waveguide substrate and a subwavelength grating or 2D hole array. Such filters are normally transparent, but when a leaky guided mode of the waveguide is excited, they become highly reflective (over 99% has been recorded experimentally) for a particular polarization, angular orientation, and wavelength range. The parameters of the filters are designed by proper choice of the grating parameters. The advantages of such filters are the few layers needed for ultra-narrow-bandwidth filters (in contrast to dichroic filters), and the potential decoupling between spectral bandwidth and angular tolerance when more than one mode is excited. Metal mesh filters Filters for sub-millimeter and near-infrared wavelengths in astronomy are metal mesh grids that are stacked together to form LP, BP, and SP filters for these wavelengths. Polarizer Another kind of optical filter is a polarizer or polarization filter, which blocks or transmits light according to its polarization. They are often made of materials such as Polaroid and are used for sunglasses and photography. Reflections, especially from water and wet road surfaces, are partially polarized, and polarized sunglasses will block some of this reflected light, allowing an angler to see better below the surface of the water and giving a driver better vision. Light from a clear blue sky is also polarized, and adjustable filters are used in colour photography to darken the appearance of the sky without introducing colours to other objects, and in both colour and black-and-white photography to control specular reflections from objects and water. Much older than guided-mode resonance filters (just above), the first polarizers (and some still made today) used a fine mesh integrated in the lens. Polarized filters are also used to view certain types of stereograms, so that each eye will see a distinct image from a single source. Arc welding An arc source puts out visible, infrared and ultraviolet light that may be harmful to human eyes. Therefore, optical filters on welding helmets must meet ANSI Z87.1 (a safety glasses specification) in order to protect human vision. Some examples of filters that would provide this kind of filtering are earth elements embedded in or coated on glass, but practically speaking it is not possible to achieve perfect filtering. A perfect filter would remove particular wavelengths and leave plenty of light, so that a worker can see what they are working on. Wedge filter A wedge filter is an optical filter so constructed that its thickness varies continuously, or in steps, in the shape of a wedge. The filter is used to modify the intensity distribution in a radiation beam. It is also known as a linearly variable filter (LVF). It is used in various optical sensors where wavelength separation is required, e.g., in hyperspectral sensors.
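A short numerical sketch of the optical density relation defined in the measurement section above (OD = −log10 T). The stacking example assumes ideal filters whose optical densities simply add, which neglects inter-reflections between real filters.

```python
import math

# Optical density and transmittance, per the definition OD = -log10(T).
# Stacking assumes ideal filters (ODs add); real filters also inter-reflect.

def od_from_transmittance(t: float) -> float:
    return -math.log10(t)

def transmittance_from_od(od: float) -> float:
    return 10.0 ** (-od)

nd8 = od_from_transmittance(1 / 8)        # a "3-stop" ND filter
stack = transmittance_from_od(0.3 + 0.6)  # an OD 0.3 and an OD 0.6 filter stacked

print(f"ND 1/8 filter: OD = {nd8:.2f}")   # ~0.90
print(f"stacked OD 0.9: T = {stack:.3f}") # ~0.126
```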
Technology
Optical components
null
742238
https://en.wikipedia.org/wiki/Metre%20per%20second%20squared
Metre per second squared
The metre per second squared is the unit of acceleration in the International System of Units (SI). As a derived unit, it is composed from the SI base units of length, the metre, and time, the second. Its symbol is written in several forms as m/s2, m·s−2, m s−2, or, less commonly, as (m/s)/s. As acceleration, the unit is interpreted physically as change in velocity or speed per time interval, i.e. metre per second per second, and is treated as a vector quantity. Example When an object experiences a constant acceleration of one metre per second squared (1 m/s2) from a state of rest, it achieves the speed of 5 m/s after 5 seconds and 10 m/s after 10 seconds. The average acceleration a can be calculated by dividing the speed v (m/s) by the time t (s), so the average acceleration in the first example would be a = v / t = (5 m/s) / (5 s) = 1 m/s2. Related units Newton's second law states that force equals mass multiplied by acceleration. The unit of force is the newton (N), and mass has the SI unit kilogram (kg). One newton equals one kilogram metre per second squared. Therefore, the unit metre per second squared is equivalent to newton per kilogram, N·kg−1, or N/kg. Thus, the Earth's gravitational field (near ground level) can be quoted as 9.8 metres per second squared, or the equivalent 9.8 N/kg. Acceleration can be measured in ratios to gravity, such as g-force, and peak ground acceleration in earthquakes. Unicode character The "metre per second squared" symbol is encoded by Unicode at code point U+33A8 (㎨). This is for compatibility with East Asian encodings and is not intended to be used in new documents.
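A minimal sketch of the relations used in this entry: v = a·t for constant acceleration from rest, and the equivalence of m/s2 with N/kg via g ≈ 9.8 m/s2 as quoted above.

```python
# v = a * t for constant acceleration from rest, plus conversion to g-units,
# using g = 9.8 m/s^2 as quoted above.

G = 9.8  # m/s^2, Earth's gravitational field near ground level (approximate)

def speed_after(accel_m_s2: float, seconds: float) -> float:
    """Speed reached from rest under constant acceleration."""
    return accel_m_s2 * seconds

def to_g_units(accel_m_s2: float) -> float:
    return accel_m_s2 / G

print(speed_after(1.0, 5))         # 5.0 m/s after 5 s, as in the example above
print(speed_after(1.0, 10))        # 10.0 m/s after 10 s
print(f"{to_g_units(9.8):.1f} g")  # 1.0 g
```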
Physical sciences
Acceleration
Basics and measurement
742288
https://en.wikipedia.org/wiki/Faraday%27s%20law%20of%20induction
Faraday's law of induction
Faraday's law of induction (or simply Faraday's law) is a law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (emf). This phenomenon, known as electromagnetic induction, is the fundamental operating principle of transformers, inductors, and many types of electric motors, generators and solenoids. The Maxwell–Faraday equation (listed as one of Maxwell's equations) describes the fact that a spatially varying (and also possibly time-varying, depending on how a magnetic field varies in time) electric field always accompanies a time-varying magnetic field, while Faraday's law states that emf (electromagnetic work done on a unit charge when it has traveled once around a conductive loop) appears on a conductive loop when the magnetic flux through the surface enclosed by the loop varies in time. Once Faraday's law had been discovered, one aspect of it (transformer emf) was formulated as the Maxwell–Faraday equation. The equation of Faraday's law can be derived from the Maxwell–Faraday equation (describing transformer emf) and the Lorentz force (describing motional emf). The integral form of the Maxwell–Faraday equation describes only the transformer emf, while the equation of Faraday's law describes both the transformer emf and the motional emf. History Electromagnetic induction was discovered independently by Michael Faraday in 1831 and Joseph Henry in 1832. Faraday was the first to publish the results of his experiments. Faraday's notebook entry of August 29, 1831 describes an experimental demonstration of electromagnetic induction (see figure) in which he wrapped two wires around opposite sides of an iron ring (like a modern toroidal transformer). His assessment of newly discovered properties of electromagnets suggested that when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Indeed, a galvanometer's needle measured a transient current (which he called a "wave of electricity") in the right side's wire when he connected or disconnected the left side's wire to a battery. This induction was due to the change in magnetic flux that occurred when the battery was connected and disconnected. His notebook entry also noted that fewer wraps on the battery side resulted in a greater disturbance of the galvanometer's needle. Within two months, Faraday had found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk"). Michael Faraday explained electromagnetic induction using a concept he called lines of force. However, scientists at the time widely rejected his theoretical ideas, mainly because they were not formulated mathematically. An exception was James Clerk Maxwell, who in 1861–62 used Faraday's ideas as the basis of his quantitative electromagnetic theory. In Maxwell's papers, the time-varying aspect of electromagnetic induction is expressed as a differential equation which Oliver Heaviside referred to as Faraday's law, even though it is different from the original version of Faraday's law and does not describe motional emf. Heaviside's version (see the Maxwell–Faraday equation below) is the form recognized today in the group of equations known as Maxwell's equations.
Lenz's law, formulated by Emil Lenz in 1834, describes "flux through the circuit", and gives the direction of the induced emf and current resulting from electromagnetic induction (elaborated upon in the examples below). According to Albert Einstein, much of the groundwork and discovery of his special relativity theory was presented by this law of induction by Faraday in 1834. Faraday's law The most widespread version of Faraday's law states: The electromotive force around a closed path is equal to the negative of the time rate of change of the magnetic flux enclosed by the path. Mathematical statement For a loop of wire in a magnetic field, the magnetic flux Φ_B is defined for any surface Σ whose boundary is the given loop. Since the wire loop may be moving, we write Σ(t) for the surface. The magnetic flux is the surface integral Φ_B = ∬_Σ(t) B(t) · dA, where dA is an element of area vector of the moving surface Σ(t), B is the magnetic field, and B · dA is a vector dot product representing the element of flux through dA. In more visual terms, the magnetic flux through the wire loop is proportional to the number of magnetic field lines that pass through the loop. When the flux changes (because B changes, or because the wire loop is moved or deformed, or both), Faraday's law of induction says that the wire loop acquires an emf ℰ, defined as the energy available from a unit charge that has traveled once around the wire loop. (Although some sources state the definition differently, this expression was chosen for compatibility with the equations of special relativity.) Equivalently, it is the voltage that would be measured by cutting the wire to create an open circuit, and attaching a voltmeter to the leads. Faraday's law states that the emf is also given by the rate of change of the magnetic flux: ℰ = −dΦ_B/dt, where ℰ is the electromotive force (emf) and Φ_B is the magnetic flux. The direction of the electromotive force is given by Lenz's law. The laws of induction of electric currents were established in mathematical form by Franz Ernst Neumann in 1845. Faraday's law contains the information about the relationships between both the magnitudes and the directions of its variables. However, the relationships between the directions are not explicit; they are hidden in the mathematical formula. It is possible to find out the direction of the electromotive force (emf) directly from Faraday's law, without invoking Lenz's law. A left hand rule helps to do this, as follows: Align the curved fingers of the left hand with the loop (yellow line). Stretch your thumb. The stretched thumb indicates the direction of n (brown), the normal to the area enclosed by the loop. Find the sign of ΔΦ_B, the change in flux. Determine the initial and final fluxes (whose difference is ΔΦ_B) with respect to the normal n, as indicated by the stretched thumb. If the change in flux, ΔΦ_B, is positive, the curved fingers show the direction of the electromotive force (yellow arrowheads). If ΔΦ_B is negative, the direction of the electromotive force is opposite to the direction of the curved fingers (opposite to the yellow arrowheads). For a tightly wound coil of wire, composed of N identical turns, each with the same Φ_B, Faraday's law of induction states that ℰ = −N dΦ_B/dt, where N is the number of turns of wire and Φ_B is the magnetic flux through a single loop. Maxwell–Faraday equation The Maxwell–Faraday equation states that a time-varying magnetic field always accompanies a spatially varying (also possibly time-varying), non-conservative electric field, and vice versa. The Maxwell–Faraday equation is (in SI units) ∇ × E = −∂B/∂t, where ∇ × is the curl operator and again E(r, t) is the electric field and B(r, t) is the magnetic field. These fields can generally be functions of position r and time t.
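As a numerical illustration of the N-turn coil formula above (ℰ = −N dΦ_B/dt), the sketch below computes the peak emf for a sinusoidally varying flux; all coil parameters are hypothetical values chosen only for the example.

```python
import math

# For an N-turn coil with flux Phi(t) = Phi0 * sin(2*pi*f*t), Faraday's law
# emf = -N * dPhi/dt gives a peak emf magnitude of N * Phi0 * 2*pi*f.
# The parameter values below are hypothetical, chosen only for illustration.

def peak_emf(turns: int, peak_flux_wb: float, freq_hz: float) -> float:
    """Peak emf (volts) of an N-turn coil threaded by a sinusoidal flux."""
    return turns * peak_flux_wb * 2 * math.pi * freq_hz

print(f"{peak_emf(turns=100, peak_flux_wb=2e-3, freq_hz=50):.1f} V")  # ~62.8 V
```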
The Maxwell–Faraday equation is one of the four Maxwell's equations, and therefore plays a fundamental role in the theory of classical electromagnetism. It can also be written in an integral form by the Kelvin–Stokes theorem: ∮_∂Σ E · dl = −∬_Σ (∂B/∂t) · dA, where, as indicated in the figure, Σ is a surface bounded by the closed contour ∂Σ, dl is an infinitesimal vector element of the contour ∂Σ, and dA is an infinitesimal vector element of surface Σ. Its direction is orthogonal to that surface patch, and its magnitude is the area of an infinitesimal patch of surface. Both dl and dA have a sign ambiguity; to get the correct sign, the right-hand rule is used, as explained in the article Kelvin–Stokes theorem. For a planar surface Σ, a positive path element dl of curve ∂Σ is defined by the right-hand rule as one that points with the fingers of the right hand when the thumb points in the direction of the normal n to the surface Σ. The line integral around ∂Σ is called circulation. A nonzero circulation of E is different from the behavior of the electric field generated by static charges. A charge-generated E-field can be expressed as the gradient of a scalar field that is a solution to Poisson's equation, and has a zero path integral. See gradient theorem. The integral equation is true for any path ∂Σ through space, and any surface Σ for which that path is a boundary. If the surface Σ is not changing in time, the equation can be rewritten: ∮_∂Σ E · dl = −d/dt ∬_Σ B · dA. The surface integral at the right-hand side is the explicit expression for the magnetic flux Φ_B through Σ. The electric vector field induced by a changing magnetic flux, the solenoidal component of the overall electric field, can be approximated in the non-relativistic limit by the volume integral equation E_s(r, t) ≈ −(1/4π) ∭ ((∂B(r′, t)/∂t) × (r − r′)) / |r − r′|³ d³r′. Proof The four Maxwell's equations (including the Maxwell–Faraday equation), along with the Lorentz force law, are a sufficient foundation to derive everything in classical electromagnetism. Therefore, it is possible to "prove" Faraday's law starting with these equations. The starting point is the time derivative of flux through an arbitrary surface Σ(t) (that can be moved or deformed) in space: dΦ_B/dt = d/dt ∬_Σ(t) B(t) · dA (by definition). This total time derivative can be evaluated and simplified with the help of the Maxwell–Faraday equation and some vector identities. The result is: dΦ_B/dt = −∮_∂Σ (E + v_l × B) · dl, where ∂Σ is the boundary (loop) of the surface Σ, and v_l is the velocity of a part of the boundary. In the case of a conductive loop, emf (electromotive force) is the electromagnetic work done on a unit charge when it has traveled around the loop once, and this work is done by the Lorentz force. Therefore, emf is expressed as ℰ = ∮ (E + v × B) · dl, where ℰ is the emf and v is the unit charge velocity. In a macroscopic view, for charges on a segment of the loop, v consists on average of two components: one is the velocity of the charge along the segment, v_t, and the other is the velocity of the segment, v_l (the loop is deformed or moved). v_t does not contribute to the work done on the charge, since the direction of v_t is the same as the direction of dl. Mathematically, (v × B) · dl = ((v_t + v_l) × B) · dl = (v_l × B) · dl, since (v_t × B) is perpendicular to dl, as v_t and dl are along the same direction. Now we can see that, for the conductive loop, the emf equals the time derivative of the magnetic flux through the loop, apart from the sign. Therefore, we now reach the equation of Faraday's law (for the conductive loop) as ℰ = −dΦ_B/dt, where ℰ = ∮ (E + v_l × B) · dl.
Breaking this integral apart, $\oint \mathbf{E} \cdot \mathrm{d}\mathbf{l}$ is for the transformer emf (due to a time-varying magnetic field) and $\oint (\mathbf{v} \times \mathbf{B}) \cdot \mathrm{d}\mathbf{l} = \oint (\mathbf{v}_l \times \mathbf{B}) \cdot \mathrm{d}\mathbf{l}$ is for the motional emf (due to the magnetic Lorentz force on charges by the motion or deformation of the loop in the magnetic field). Exceptions It is tempting to generalize Faraday's law to state: If $\partial\Sigma$ is any arbitrary closed loop in space whatsoever, then the total time derivative of magnetic flux through $\Sigma$ equals the emf around $\partial\Sigma$. This statement, however, is not always true, and not merely for the obvious reason that emf is undefined in empty space when no conductor is present. As noted in the previous section, Faraday's law is not guaranteed to work unless the velocity of the abstract curve $\partial\Sigma$ matches the actual velocity of the material conducting the electricity. The two examples illustrated below show that one often obtains incorrect results when the motion of $\partial\Sigma$ is divorced from the motion of the material. One can analyze examples like these by taking care that the path $\partial\Sigma$ moves with the same velocity as the material. Alternatively, one can always correctly calculate the emf by combining the Lorentz force law with the Maxwell–Faraday equation: $\mathcal{E} = \oint_{\partial\Sigma} \left( \mathbf{E} + \mathbf{v}_m \times \mathbf{B} \right) \cdot \mathrm{d}\mathbf{l} = -\int_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot \mathrm{d}\mathbf{A} + \oint_{\partial\Sigma} \left( \mathbf{v}_m \times \mathbf{B} \right) \cdot \mathrm{d}\mathbf{l}$, where "it is very important to notice that (1) $\mathbf{v}_m$ is the velocity of the conductor ... not the velocity of the path element $\mathrm{d}\mathbf{l}$ and (2) in general, the partial derivative with respect to time cannot be moved outside the integral since the area is a function of time." Faraday's law and relativity Two phenomena Faraday's law is a single equation describing two different phenomena: the motional emf generated by a magnetic force on a moving wire (see the Lorentz force), and the transformer emf generated by an electric force due to a changing magnetic field (described by the Maxwell–Faraday equation). James Clerk Maxwell drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gives a separate physical explanation for each of the two phenomena. A reference to these two aspects of electromagnetic induction is made in some modern textbooks. As Richard Feynman observed, there is no other place in physics where such a simple and accurate general principle requires, for its real understanding, an analysis in terms of two different phenomena. Explanation based on four-dimensional formalism In the general case, explanation of the appearance of the motional emf by the action of the magnetic force on the charges in the moving wire, or in the circuit changing its area, is unsatisfactory. As a matter of fact, the charges in the wire or in the circuit could be completely absent; would the electromagnetic induction effect then disappear? This situation is analyzed in a paper in which, when the integral equations of the electromagnetic field are written in a four-dimensional covariant form, the total time derivative of the magnetic flux through the circuit appears in Faraday's law instead of the partial time derivative. Thus, electromagnetic induction appears either when the magnetic field changes over time or when the area of the circuit changes. From the physical point of view, it is better to speak not of the induction emf, but of the induced electric field strength $\mathbf{E}$ that occurs in the circuit when the magnetic flux changes. In this case, the contribution to $\mathbf{E}$ from the change in the magnetic field is made through the term $-\frac{\partial \mathbf{A}}{\partial t}$, where $\mathbf{A}$ is the vector potential.
If the circuit area is changing while the magnetic field is constant, then some part of the circuit is inevitably moving, and the electric field $\mathbf{E}' = \mathbf{v} \times \mathbf{B}$ emerges in this part of the circuit in the comoving reference frame K′ as a result of the Lorentz transformation of the magnetic field $\mathbf{B}$ present in the stationary reference frame K, which passes through the circuit. The presence of the field $\mathbf{E}'$ in K′ is considered to be a result of the induction effect in the moving circuit, regardless of whether the charges are present in the circuit or not. In the conducting circuit, the field $\mathbf{E}'$ causes motion of the charges. In the reference frame K, it looks like the appearance of an emf of the induction $\mathcal{E}$, the gradient of which, in the form of $\nabla \mathcal{E}$ taken along the circuit, seems to generate the field $\mathbf{E}'$. Einstein's view Reflection on this apparent dichotomy was one of the principal paths that led Albert Einstein to develop special relativity.
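The frame-transformation argument above can be made concrete with the low-velocity limit of the Lorentz transformation of the fields; this is a standard result, restated here for orientation rather than taken from this article.

```latex
% Low-velocity (v << c) limit of the field transformations from the
% stationary frame K to the comoving frame K':
\mathbf{E}' \approx \mathbf{E} + \mathbf{v} \times \mathbf{B},
\qquad
\mathbf{B}' \approx \mathbf{B} - \frac{1}{c^{2}}\,\mathbf{v} \times \mathbf{E}
% With E = 0 in K, the moving circuit segment sees E' = v x B:
% the motional force per unit charge reappears as a genuine electric
% field in the comoving frame, with or without charges present.
```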
Physical sciences
Electrodynamics
Physics
10319171
https://en.wikipedia.org/wiki/Density%20wave%20theory
Density wave theory
Density wave theory or the Lin–Shu density wave theory is a theory proposed by C.C. Lin and Frank Shu in the mid-1960s to explain the spiral arm structure of spiral galaxies. The Lin–Shu theory introduces the idea of a long-lived quasistatic spiral structure (the QSSS hypothesis). In this hypothesis, the spiral pattern rotates with a particular angular frequency (pattern speed), whereas the stars in the galactic disk orbit at varying speeds, which depend on their distance to the galaxy center. The presence of spiral density waves in galaxies has implications for star formation, since the gas orbiting around the galaxy may be compressed and cause shock waves periodically. Theoretically, the formation of a global spiral pattern is treated as an instability of the stellar disk caused by its self-gravity, as opposed to tidal interactions. The mathematical formulation of the theory has also been extended to other astrophysical disk systems, such as Saturn's rings. Galactic spiral arms Originally, astronomers had the idea that the arms of a spiral galaxy were material. However, if this were the case, then the arms would become more and more tightly wound, since the matter nearer to the center of the galaxy rotates faster than the matter at the edge of the galaxy. The arms would become indistinguishable from the rest of the galaxy after only a few orbits. This is called the winding problem. Lin & Shu proposed in 1964 that the arms were not material in nature, but instead made up of areas of greater density, similar to a traffic jam on a highway. The cars move through the traffic jam: the density of cars increases in the middle of it. The traffic jam itself, however, moves more slowly. In the galaxy, stars, gas, dust, and other components move through the density waves, are compressed, and then move out of them. More specifically, the density wave theory argues that the "gravitational attraction between stars at different radii" prevents the so-called winding problem, and actually maintains the spiral pattern. The rotation speed of the arms is defined to be $\Omega_{gp}$, the global pattern speed. (Thus, within a certain non-inertial reference frame, which is rotating at $\Omega_{gp}$, the spiral arms appear to be at rest). The stars within the arms are not necessarily stationary, though at a certain distance from the center, $R_c$, the corotation radius, the stars and the density waves move together. Inside that radius, stars move more quickly ($\Omega > \Omega_{gp}$) than the spiral arms, and outside, stars move more slowly ($\Omega < \Omega_{gp}$). For an m-armed spiral, a star at radius $R$ from the center will move through the structure with a frequency $m(\Omega_{gp} - \Omega(R))$. So, the gravitational attraction between stars can only maintain the spiral structure if the frequency at which a star passes through the arms is less than the epicyclic frequency, $\kappa(R)$, of the star. This means that a long-lived spiral structure will only exist between the inner and outer Lindblad resonances (ILR and OLR, respectively), which are defined as the radii such that $m(\Omega_{gp} - \Omega) = -\kappa$ and $m(\Omega_{gp} - \Omega) = +\kappa$, respectively. Past the OLR and within the ILR, the extra density in the spiral arms pulls more often than the epicyclic rate of the stars, and the stars are thus unable to react and move in such a way as to "reinforce the spiral density enhancement". Further implications The density wave theory also explains a number of other observations that have been made about spiral galaxies.
For example, "the ordering of H I clouds and dust bands on the inner edges of spiral arms, the existence of young, massive stars and H II regions throughout the arms, and an abundance of old, red stars in the remainder of the disk". When clouds of gas and dust enter into a density wave and are compressed, the rate of star formation increases as some clouds meet the Jeans criterion, and collapse to form new stars. Since star formation does not happen immediately, the stars are slightly behind the density waves. The hot OB stars that are created ionize the gas of the interstellar medium, and form H II regions. These stars have relatively short lifetimes, however, and expire before fully leaving the density wave. The smaller, redder stars do leave the wave, and become distributed throughout the galactic disk. Density waves have also been described as pressurizing gas clouds and thereby catalyzing star formation. Application to Saturn's rings Beginning in the late 1970s, Peter Goldreich, Frank Shu, and others applied density wave theory to the rings of Saturn. Saturn's rings (particularly the A Ring) contain a great many spiral density waves and spiral bending waves excited by Lindblad resonances and vertical resonances (respectively) with Saturn's moons. The physics are largely the same as with galaxies, though spiral waves in Saturn's rings are much more tightly wound (extending a few hundred kilometers at most) due to the very large central mass (Saturn itself) compared to the mass of the disk. The Cassini mission revealed very small density waves excited by the ring-moons Pan and Atlas and by high-order resonances with the larger moons, as well as waves whose form changes with time due to the varying orbits of Janus and Epimetheus.
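Returning to the galactic case, the resonance conditions above have closed-form solutions for the idealized flat rotation curve, for which $\Omega(R) = v_c/R$ and $\kappa = \sqrt{2}\,\Omega$. The following minimal sketch locates corotation and the Lindblad resonances; the function name and the Milky-Way-like parameter values are illustrative assumptions, not values taken from the text.

```python
import math

def resonance_radii(v_c, omega_p, m=2):
    """Corotation and Lindblad resonance radii for a flat rotation curve.

    For a flat curve, Omega(R) = v_c / R and kappa = sqrt(2) * Omega,
    so the resonance conditions m(Omega_p - Omega) = -/+ kappa (ILR/OLR)
    can be solved for R in closed form. An ILR exists only if m > sqrt(2).
    v_c in km/s, omega_p in km/s/kpc, radii returned in kpc.
    """
    r_cr = v_c / omega_p                        # corotation: Omega = Omega_p
    r_ilr = (1.0 - math.sqrt(2.0) / m) * r_cr   # inner Lindblad resonance
    r_olr = (1.0 + math.sqrt(2.0) / m) * r_cr   # outer Lindblad resonance
    return r_ilr, r_cr, r_olr

# Illustrative, Milky-Way-like numbers: v_c = 220 km/s,
# pattern speed 25 km/s/kpc, two-armed spiral.
ilr, cr, olr = resonance_radii(v_c=220.0, omega_p=25.0, m=2)
print(f"ILR ~ {ilr:.1f} kpc, corotation ~ {cr:.1f} kpc, OLR ~ {olr:.1f} kpc")
# -> ILR ~ 2.6 kpc, corotation ~ 8.8 kpc, OLR ~ 15.0 kpc
```

In this toy model, a long-lived two-armed pattern would be confined roughly between 2.6 and 15 kpc.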
Physical sciences
Galaxy classification
Astronomy
19373997
https://en.wikipedia.org/wiki/Elevator
Elevator
An elevator (American English) or lift (Commonwealth English) is a machine that vertically transports people or freight between levels. They are typically powered by electric motors that drive traction cables and counterweight systems such as a hoist, although some pump hydraulic fluid to raise a cylindrical piston like a jack. Elevators are used in agriculture and manufacturing to lift materials. There are various types, like chain and bucket elevators, grain augers, and hay elevators. Modern buildings often have elevators to ensure accessibility, especially where ramps aren't feasible. High-speed elevators are common in skyscrapers. Some elevators can even move horizontally. History Pre-industrial era The earliest known reference to an elevator is in the works of the Roman architect Vitruvius, who reported that Archimedes ( – ) built his first elevator probably in 236 BC. Sources from later periods mention elevators as cabs on a hemp rope, powered by people or animals. The Roman Colosseum, completed in 80 AD, had roughly 25 elevators that were used for raising animals up to the floor. Each elevator could carry about (roughly the weight of two lions) up when powered by up to eight men. In 1000, the Book of Secrets by Ibn Khalaf al-Muradi in Islamic Spain described the use of an elevator-like lifting device to raise a large battering ram to destroy a fortress. In the 17th century, prototypes of elevators were installed in the palace buildings of England and France. Louis XV of France had a so-called 'flying chair' built for one of his mistresses at the Château de Versailles in 1743. Ancient and medieval elevators used drive systems based on hoists and windlasses. The invention of a system based on the screw drive was perhaps the most important step in elevator technology since ancient times, leading to the creation of modern passenger elevators. The first screw-drive elevator was built by Ivan Kulibin and installed in the Winter Palace in 1793, although there may have been an earlier design by Leonardo da Vinci. Several years later, another of Kulibin's elevators was installed in the Arkhangelskoye near Moscow. Industrial Era The development of elevators was led by the need for movement of raw materials, including coal and lumber, from hillsides. The technology developed by these industries, and the introduction of steel beam construction, worked together to provide the passenger and freight elevators in use today. Starting in coal mines, elevators in the mid-19th century operated with steam power, and were used for moving goods in bulk in mines and factories. These devices were soon applied to a diverse set of purposes. In 1823, Burton and Homer, two architects in London, built and operated a novel tourist attraction which they called the "ascending room", which elevated customers to a considerable height in the center of London, providing a panoramic view. Early, crude steam-driven elevators were refined in the ensuing decade. In 1835, an innovative elevator, the Teagle, was developed by the company Frost and Stutt in England. It was belt-driven and used a counterweight for extra power. In 1845, Neapolitan architect Gaetano Genovese installed the "Flying chair", an elevator ahead of its time in the Royal Palace of Caserta. It was covered with chestnut wood outside and with maple wood inside. It included a light, two benches, and a hand-operated signal, and could be activated from the outside, without any effort by the occupants. 
Traction was controlled by a motor mechanic utilizing a system of toothed wheels. A safety system was designed to take effect if the cords broke, consisting of a beam pushed outwards by a steel spring. The hydraulic crane was invented by Sir William Armstrong in 1846, primarily for use at the Tyneside docks for loading cargo. They quickly supplanted the earlier steam-driven elevators, exploiting Pascal's law to provide much greater force. A water pump supplied a variable level of water pressure to a plunger encased inside a vertical cylinder, allowing the platform, carrying a heavy load, to be raised and lowered. Counterweights and balances were also used to increase lifting power. Henry Waterman of New York is credited with inventing the "standing rope control" for an elevator in 1850. In 1852, Elisha Otis introduced the safety elevator, which prevented the fall of the cab if the cable broke. He demonstrated it at the New York exposition in the Crystal Palace in a dramatic, death-defying presentation in 1854, and the first such passenger elevator was installed at 488 Broadway in New York City on 23 March 1857. The first elevator shaft preceded the first elevator by four years. Construction for Peter Cooper's Cooper Union Foundation building in New York began in 1853. An elevator shaft was included in the design because Cooper was confident that a safe passenger elevator would soon be invented. The shaft was cylindrical because Cooper thought it was the most efficient design. Otis later designed a special elevator for the building. Peter Ellis, an English architect, installed the first elevators that could be described as paternoster elevators in Oriel Chambers in Liverpool in 1868. The Equitable Life Building, completed in 1870 in New York City, is thought to be the first office building with passenger elevators. In 1872, American inventor James Wayland patented a novel method of securing elevator shafts with doors that are automatically opened and closed as the elevator car approaches and leaves them. In 1874, J. W. Meaker patented a method permitting elevator doors to open and close safely. The first electric elevator was built by Werner von Siemens in 1880 in Germany. Inventor Anton Freissler further developed von Siemens' ideas and created a successful elevator enterprise in Austria-Hungary. The safety and speed of electric elevators were significantly enhanced by Frank Sprague, who added floor control, automatic operation, acceleration control, and further safety devices. His elevator ran faster and with larger loads than hydraulic or steam elevators. 584 of Sprague's elevators were installed before he sold his company to the Otis Elevator Company in 1895. Sprague also developed the idea and technology for multiple elevators in a single shaft. In 1871, when hydraulic power was a well established technology, Edward B. Ellington founded Wharves and Warehouses Steam Power and Hydraulic Pressure Company, which became the London Hydraulic Power Company in 1883. It constructed a network of high-pressure mains on both sides of the Thames which ultimately extended and powered some 8,000 machines, predominantly elevators and cranes. Schuyler Wheeler patented his electric elevator design in 1883. In 1884, American inventor D. Humphreys of Norfolk, Virginia, patented an elevator with automatic doors that closed off the elevator shaft when the car was not being entered or exited. 
In 1887, American inventor Alexander Miles of Duluth, Minnesota, patented an elevator with automatic doors that closed off the elevator shaft when the car was not being entered or exited. In 1891, American inventors Joseph Kelly and William L. Woods co-patented a novel way to guard elevator shafts against accident, by way of hatches that would automatically open and close as the car passed through them. The first elevator in India was installed at the Raj Bhavan in Kolkata by Otis in 1892. By 1900, completely automated elevators were available, but passengers were reluctant to use them. Their adoption was aided by a 1945 elevator operator strike in New York City, and the addition of an emergency stop button, emergency telephone, and a soothing explanatory automated voice. An inverter-controlled gearless drive system is applied in high-speed elevators worldwide. The Toshiba company continued research on thyristors for use in inverter control and dramatically enhanced their switching capacity, resulting in the development of insulated gate bipolar transistors (IGBTs) at the end of the 1980s. The IGBT realized increased switching frequency and reduced magnetic noise in the motor, eliminating the need for a filter circuit and allowing a more compact system. The IGBT also allowed the development of a small, highly integrated, highly sophisticated all-digital control device, consisting of a high-speed processor, specially customized gate arrays, and a circuit capable of controlling large currents of several kHz. In 2000, the first vacuum elevator was offered commercially in Argentina. Design Some people argue that elevators began as simple rope or chain hoists (see Traction elevators below). An elevator is essentially a platform that is either pulled or pushed up by mechanical means. A modern-day elevator consists of a cab (also called a "cabin", "cage", "carriage" or "car") mounted on a platform within an enclosed space called a shaft or sometimes a "hoistway". In the past, elevator drive mechanisms were powered by steam and water hydraulic pistons or by hand. In a "traction" elevator, cars are pulled up by means of rolling steel ropes over a deeply grooved pulley, commonly called a sheave in the industry. The weight of the car is balanced by a counterweight. Oftentimes two elevators (or sometimes three) are built so that their cars always move synchronously in opposite directions, and are each other's counterweight. The friction between the ropes and the pulley furnishes the traction which gives this type of elevator its name. Hydraulic elevators use the principles of hydraulics (in the sense of hydraulic power) to pressurize an above-ground or in-ground piston to raise and lower the car (see Hydraulic elevators below). Roped hydraulics use a combination of both ropes and hydraulic power to raise and lower cars. Recent innovations include permanent magnet motors, machine room-less rail mounted gearless machines, and microprocessor controls. The technology used in new installations depends on a variety of factors. Hydraulic elevators are cheaper, but installing cylinders greater than a certain length becomes impractical for very-high lift hoistways. For buildings of much over seven floors, traction elevators must be employed instead. Hydraulic elevators are usually slower than traction elevators. Elevators are a candidate for mass customization. 
There are economies to be made from mass production of the components, but each building comes with its own requirements like different number of floors, dimensions of the well and usage patterns. Doors Elevator doors prevent riders from falling into, entering, or tampering with anything in the shaft. The most common configuration is to have two panels that meet in the middle and slide open laterally. These are known as "center-opening". In a cascading telescopic configuration (potentially allowing wider entryways within limited space), the doors roll on independent tracks so that while open, they are tucked behind one another, and while closed, they form cascading layers on one side. This can be configured so that two sets of such cascading doors operate like the center opening doors described above, allowing for a very wide elevator cab. In less expensive installations the elevator can also use one large "slab" door: a single panel door the width of the doorway that opens to the left or right laterally. These are known as "single slide" doors. Some buildings have elevators with the single door on the shaftway, and double cascading doors on the cab. Machine room-less (MRL) elevators Elevators that do not require separate machine rooms are designed so that most of their power and control components fit within the hoistway (the shaft containing the elevator car), and a small cabinet houses the controller. The equipment is otherwise similar to that of a normal traction or hole-less hydraulic elevator. The world's first machine-room-less elevator, the Kone MonoSpace, was introduced in the year 1996, by Kone. Compared to traditional elevators, it: Required less space Used 70–80% less energy Used no hydraulic oil (in contrast to traditional hydraulic units) Had all components above ground (avoiding the environmental concern created by the hydraulic cylinder on direct hydraulic-type elevators being underground) Cost somewhat less than other systems, and significantly less than the hydraulic MRL elevator Could operate at faster speeds than hydraulics, but not normal traction units Its disadvantage was that it could be harder, and significantly more dangerous, to service and maintain. Other facts Noise level of 50–55 dBA (A-weighted decibels), lower than some but not all types of elevators Usually used for low-rise to mid-rise buildings National and local building codes did not address elevators without machine rooms. Residential MRL elevators are still not allowed by the ASME A17 code in the US. MRL elevators have been recognized in the 2005 supplement to the 2004 A17.1 Elevator Code. Today, some machine room-less hydraulic elevators by Otis and TK Elevator exist. They do not involve the use of an underground piston or a machine room, mitigating environmental concerns; however, they are not allowed by codes in some parts of the United States. Double-decker elevators Double-decker elevators are traction elevators with cars that have an upper and lower deck. Both decks, which can serve a floor at the same time, are usually driven by the same motor. The system increases efficiency in high-rise buildings, and saves space so additional shafts and cars are not required. In 2003, TK Elevator invented a system called TWIN, with two elevator cars independently running in one shaft. Traffic calculations Round-trip time calculations History In 1901, consulting engineer Charles G. Darrach (1846–1927) proposed the first formula to determine elevator service. In 1908, Reginald P. 
Bolton published the first book devoted to this subject, Elevator Service. The summation of his work was a massive fold-out chart (placed at the back of his book) that allowed users to determine the number of express and local elevators needed for a given building to meet a desired interval of service. In 1912, commercial engineer Edmund F. Tweedy and electrical engineer Arthur Williams co-authored a book titled Commercial Engineering for Central Stations. They followed Bolton's lead and developed a "Chart for determining the number and size of elevators required for office buildings of a given total occupied floor area". In 1920, Howard B. Cook presented a paper titled "Passenger Elevator Service". This paper marked the first time a member of the elevator industry offered a mathematical means of determining elevator service. His formula determined the round trip time (RTT) by finding the single trip time, doubling it, and adding 10 seconds. In 1923, Bassett Jones published an article titled "The Probable Number of Stops Made by an Elevator". He based his equations on the theory of probabilities and found a reasonably accurate method of calculating the average stop count. The equation in this article assumed a consistent population on every floor. He went on to write an updated version of his equations in 1926, which accounted for variable population on each floor. Jones credited David Lindquist for the development of the equation but provided no indication as to when it was first proposed. Although the equations were there, elevator traffic analysis was still a very specialist task that could only be done by world experts. That remained the case until 1967, when Strakosch set out an eight-step method for finding the efficiency of a system in "Vertical Transportation: Elevators and Escalators". Uppeak calculations In 1975, Barney and Dos Santos developed and published the "Round Trip Time (RTT) formula", which followed Strakosch's work. This was the first formalized mathematical model and is the simplest form that is still used by traffic analyzers today. Modifications and improvements have been made to this equation over the years, most significantly in 2000, when Peters published "Improvements to the Up Peak Round Trip Time Calculation", which improved the accuracy of the flight time calculation, making allowances for short elevator journeys when the car does not reach maximum rated speed or acceleration, and added the functionality of express zones. This equation is now referred to as the up-peak calculation, as it uses the assumption that all the passengers are coming into the building from the ground floor (incoming traffic), that there are no passengers traveling from a higher floor to the ground floor (outgoing traffic), and that there are no passengers traveling from one internal floor to another (interfloor traffic). This model works well if a building is at its busiest first thing in the morning; however, in more complicated elevator systems, this model does not work. General analysis In 1990, Peters published a paper titled "Lift Traffic Analysis: Formulae for the General Case", in which he developed a new formula that accounts for mixed traffic patterns as well as for passenger bunching, using a Poisson approximation. This new general analysis equation enabled much more complex systems to be analyzed; however, the equations had now become so complex that they were almost impossible to work through manually, and it became necessary to use software to run the calculations.
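For orientation, the up-peak round-trip-time calculation described above can be sketched in a few lines. The probable-stops and highest-reversal-floor expressions below are the standard textbook forms under the equal-floor-population assumption; the function name and parameter values are illustrative, not taken from the sources cited here.

```python
def uppeak_rtt(n_floors, p, t_v, t_s, t_p):
    """Up-peak round-trip time (standard textbook form, equal floor
    populations assumed; illustrative sketch only).

    n_floors : floors served above the main terminal
    p        : passengers carried per round trip
    t_v      : time to travel one floor at rated speed (s)
    t_s      : time lost per stop (s)
    t_p      : passenger boarding or alighting time (s)
    """
    # Expected number of stops for p passengers over n equally likely floors.
    s = n_floors * (1.0 - (1.0 - 1.0 / n_floors) ** p)
    # Expected highest reversal floor.
    h = n_floors - sum((i / n_floors) ** p for i in range(1, n_floors))
    # Travel up and back, plus stop penalties, plus passenger transfers.
    return 2.0 * h * t_v + (s + 1.0) * t_s + 2.0 * p * t_p

# Illustrative values: 10 floors, 10 passengers per trip, 2 s per floor,
# 10 s lost per stop, 1.2 s per passenger transfer.
print(f"RTT ~ {uppeak_rtt(10, 10, 2.0, 10.0, 1.2):.0f} s")  # -> RTT ~ 137 s
```

Dividing the round-trip time by the number of cars gives the up-peak interval, from which a five-minute handling capacity follows.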
The GA formula was extended even further in 1996 to account for double-deck elevators. Simulations RTT calculations establish an elevator system's handling capacity by using a set of repeatable calculations which, for a given set of inputs, always produce the same answer. This approach works well for simple systems, but as systems get more complex, the calculations are harder to develop and implement. For very complex systems, the solution is to simulate the building. Dispatcher-based simulation In this method, a virtual version of a building is created on a computer, modeling passengers and elevators as realistically as possible, and random numbers are used to model probability rather than mathematical equations and percentage probability. Dispatcher-based simulation has had major improvements over the years, but the principle remains the same. The most widely used simulator, Elevate, was first showcased in 1998 as Elevate Lite. Although it is currently the most accurate method of modeling an elevator system, the method does have drawbacks. Unlike the calculations, it does not find an RTT value because it does not run standard round trips; thus it does not conform with standardized elevator-traffic analysis methodology, and cannot be used to find values such as the average interval; instead, it is generally used to find the average waiting time. Monte Carlo simulation At the first Elevator and Escalator symposium in 2011, Al-Sharif proposed an alternative form of simulation that modeled a car's single round trip before restarting and running again. This method is still capable of modeling complex systems, and also conforms with standard methodology by producing an RTT value. The model was improved further in 2018, when Al-Sharif demonstrated a way to reintroduce a dispatcher-like function which can model destination control systems. While this does successfully remove simulation's major drawback, it is not quite as accurate as dispatcher-based simulation, given its simplifications and non-continual nature. The Monte Carlo method also requires a passenger count as an input, rather than the passengers-per-second rate used in other methodologies. Types of hoist mechanisms Elevators can be rope-dependent or rope-free. There are at least four means of moving an elevator: Traction elevators Geared traction machines are driven by AC or DC electric motors. Geared machines use worm gears to control mechanical movement of elevator cars by "rolling" steel hoist ropes over a drive sheave which is attached to a gearbox driven by a high-speed motor. These machines are generally the best option for basement or overhead traction use for speeds up to . Historically, AC motors were used for single- or double-speed elevator machines on the grounds of cost in lower-usage applications where car speed and passenger comfort were less of an issue, but for higher-speed, larger-capacity elevators, the need for infinitely variable speed control over the traction machine becomes an issue. Therefore, DC machines powered by an AC/DC motor-generator were the preferred solution.
The MG set also typically powered the relay controller of the elevator, which has the added advantage of electrically isolating the elevators from the rest of a building's electrical system, thus eliminating the transient power spikes in the building's electrical supply caused by the motors starting and stopping (causing lighting to dim every time the elevators are used, for example), as well as interference to other electrical equipment caused by the arcing of the relay contactors in the control system. The widespread availability of variable-frequency AC drives has allowed AC motors to be used universally, bringing with them the advantages of the older motor-generator, DC-based systems, without the penalties in terms of efficiency and complexity. The older MG-based installations are gradually being replaced in older buildings due to their poor energy efficiency. Gearless traction machines are low-speed (low-RPM), high-torque electric motors powered either by AC or DC. In this case, the drive sheave is directly attached to the end of the motor. Gearless traction elevators can reach speeds of up to . A brake is mounted between the motor and gearbox, or between the motor and drive sheave, or at the end of the drive sheave, to hold the elevator stationary at a floor. This brake is usually an external drum type and is actuated by spring force and held open electrically; a power failure will cause the brake to engage and prevent the elevator from falling (see inherent safety and safety engineering). The brake can also be some form of disc type, such as one or more calipers over a disc at one end of the motor shaft or drive sheave. This arrangement is used in high-speed, high-rise and large-capacity elevators with machine rooms for braking power, compactness and redundancy (assuming there are at least two calipers on the disc); an exception is the Kone MonoSpace's EcoDisc, which is not high-speed, high-rise or large-capacity and is machine-room-less, but uses the same design as a thinner version of a conventional gearless traction machine. Alternatively, one or more disc brakes with a single caliper at one end of the motor shaft or drive sheave may be used in machine-room-less elevators for compactness, braking power, and redundancy (assuming there are two or more brakes). In each case, steel or Kevlar cables are attached to a hitch plate on top of the cab, or may be "underslung" below a cab, and then looped over the drive sheave to a counterweight attached to the opposite end of the cables, which reduces the amount of power needed to move the cab. The counterweight is located in the hoistway and is carried along a separate railway system; as the car goes up, the counterweight goes down, and vice versa. This action is powered by the traction machine, which is directed by the controller, typically a relay logic or computerized device that directs starting, acceleration, deceleration and stopping of the elevator cab. The weight of the counterweight is typically equal to the weight of the elevator cab plus 40–50% of the capacity of the elevator. The grooves in the drive sheave are specially designed to prevent the cables from slipping. "Traction" is provided to the ropes by the grip of the grooves in the sheave, hence the name. As the ropes age and the traction grooves wear, some traction is lost and the ropes must be replaced and the sheave repaired or replaced. Sheave and rope wear may be significantly reduced by ensuring that all ropes have equal tension, thus sharing the load evenly.
Rope tension equalization may be achieved using a rope tension gauge, and is a simple way to extend the lifetime of the sheaves and ropes. Elevators with more than of travel have a system called compensation. This is a separate set of cables or a chain attached to the bottom of the counterweight and the bottom of the elevator cab. This makes it easier to control the elevator, as it compensates for the differing weight of cable between the hoist and the cab. If the elevator cab is at the top of the hoist-way, there is a short length of hoist cable above the car and a long length of compensating cable below the car and vice versa for the counterweight. If the compensation system uses cables, there will be an additional sheave in the pit below the elevator, to guide the cables. If the compensation system uses chains, the chain is guided by a bar mounted between the counterweight tracks. Regenerative drives Another energy-saving improvement is the regenerative drive, which works analogously to regenerative braking in vehicles, using the elevator's electric motor as a generator to capture some of the gravitational potential energy of descent of a full cab (heavier than its counterweight) or ascent of an empty cab (lighter than its counterweight) and return it to the building's electrical system. Hydraulic elevators Conventional hydraulic elevators. They use an underground hydraulic cylinder, are quite common for low level buildings with two to five floors (sometimes but seldom up to six to eight floors), and have speeds of up to . Holeless hydraulic elevators were developed in the 1970s, and use a pair of above-ground cylinders, which makes it practical for environmentally or cost-sensitive buildings with two, three, or four floors. Roped hydraulic elevators use both above-ground cylinders and a rope system, allowing the elevator to travel further than the piston has to move. The low mechanical complexity of hydraulic elevators in comparison to traction elevators makes them ideal for low-rise, low-traffic installations. They are less energy-efficient as the pump works against gravity to push the car and its passengers upwards; this energy is lost when the car descends on its own weight. The high current draw of the pump when starting up also places higher demands on a building's electrical system. There are also environmental concerns should the lifting cylinder leak fluid into the ground, hence the development of holeless hydraulic elevators, which also eliminate the need for a relatively deep hole in the bottom of the elevator shaft. Electromagnetic propulsion Cable-free elevators using electromagnetic propulsion, capable of moving both vertically and horizontally, have been developed by German engineering firm Thyssen Krupp for use in high rise, high density buildings. Climbing elevator A climbing elevator is a self-ascending elevator with its own propulsion. The propulsion can be done by an electric or a combustion engine. Climbing elevators are used in guyed masts or towers, in order to make easy access to parts of these constructions, such as flight safety lamps for maintenance. An example would be the moonlight towers in Austin, Texas, where the elevator holds only one person and equipment for maintenance. The Glasgow Tower—an observation tower in Glasgow, Scotland—also makes use of two climbing elevators. 
Temporary climbing elevators are commonly used in the construction of new high-rise buildings to move materials and personnel before the building's permanent elevator system is installed, at which point the climbing elevators are dismantled. Pneumatic elevator An elevator of this kind uses a vacuum on top of the cab and a valve on the top of the "shaft" to move the cab upwards, and closes the valve in order to keep the cab at the same level. A diaphragm or a piston is used as a "brake" if there is a sudden increase in pressure above the cab. To go down, the valve is opened so that the air can pressurize the top of the "shaft", allowing the cab to go down under its own weight. This also means that in case of a power failure, the cab will automatically go down. The "shaft" is made of acrylic, and is always round due to the shape of the vacuum pump. To keep the air inside the cab, rubber seals are used. Due to technical limitations, these elevators have a low capacity; they usually allow 1–3 passengers and up to . Controls Manual controls In the first half of the twentieth century, almost all elevators had no automatic positioning of the floor on which the cab would stop. Some of the older freight elevators were controlled by switches operated by pulling on adjacent ropes. In general, most elevators before WWII were manually controlled by elevator operators using a rheostat connected to the motor. This rheostat was enclosed within a cylindrical container about the size and shape of a cake, and was mounted upright or sideways on the cab wall and operated via a projecting handle, which was able to slide around the top half of the cylinder. The elevator motor was located at the top of the shaft or beside the bottom of the shaft. Pushing the handle forward would cause the cab to rise; backwards would make it sink. The harder the pressure, the faster the elevator would move. The handle also served as a dead man switch: if the operator let go of the handle, it would return to its upright position, causing the elevator cab to stop. In time, safety interlocks would ensure that the inner and outer doors were closed before the elevator was allowed to move. This lever would allow some control over the energy supplied to the motor and so enabled the elevator to be accurately positioned—if the operator was sufficiently skilled. More typically, the operator would have to "jog" the control, moving the cab in small increments until the elevator was reasonably close to the landing point. Then the operator would direct the outgoing and incoming passengers to "watch the step". The Otis Autotronic system of the early 1950s brought the earliest predictive systems, which could anticipate traffic patterns within a building to deploy elevator movement in the most efficient manner. Relay-controlled elevator systems remained common until the 1980s; they were gradually replaced with solid-state systems, and microprocessor-based controls are now the industry standard. Most older, manually operated elevators have been retrofitted with automatic or semi-automatic controls. General controls A typical modern passenger elevator will have: Outside the elevator, buttons to go up or down (the bottom floor has only the up button, the top floor has only the down button, and every floor in between usually has both) Space to stand in, guardrails, seating cushion (luxury) Overload sensor – prevents the elevator from moving until excess load has been removed. It may trigger a voice prompt or buzzer alarm.
This may also trigger a "full car" indicator, indicating the car's inability to accept more passengers until some are unloaded. Electric fans or air conditioning units to enhance circulation and comfort. A control panel with various buttons. In many countries, button text and icons are raised to allow blind users to operate the elevator; many have Braille text besides. Buttons include: Call buttons to choose a floor. Some of these may be key switches (to control access). In some elevators, such as those in some hotels, certain floors are inaccessible unless one swipes a security card or enters a passcode. An alarm button or switch, which passengers can use to warn the premises manager that they have been trapped in the elevator. Some elevators also have emergency telephones to summon help in the event of entrapment. In many jurisdictions this is required by law. Door open and door close buttons. The operation of the door open button is transparent, immediately opening and holding the door, typically until a timeout occurs and the door closes. The operation of the door close button is less transparent, and it often appears to do nothing, leading to frequent but incorrect reports that the door close button is a placebo button: either not wired up at all, or inactive in normal service. On many older elevators, if one is present, the door close button is functional because the elevator is not ADA compliant and/or it does not have a fire service mode. Working door open and door close buttons are required by code in many jurisdictions, including the United States, specifically for emergency operation: in independent mode, the door open and door close buttons are used to manually open or close the door. Beyond this, programming varies significantly, with some door close buttons immediately closing the door, but in other cases being delayed by an overall timeout, so the door cannot be closed until a few seconds after opening. In this case (hastening normal closure), the door close button has no effect. However, the door close button will cause a hall call to be ignored (so the door will not reopen), and once the timeout has expired, the door close will immediately close the door, for example, to cancel a door open push. The minimum timeout for automatic door closing in the US is 5 seconds, which is a noticeable delay if not over-ridden. A set of doors kept locked on each floor to prevent unintentional access into the elevator shaft by the unsuspecting individual. The door is unlocked and opened by a machine sitting on the roof of the car, which also drives the doors that travel with the car. Door controls are provided to close immediately or reopen the doors, although the button to close them immediately is often disabled during normal operations, especially on more recent elevators. Objects in the path of the moving doors will either be detected by sensors or physically activate a switch that reopens the doors. Otherwise, the doors will close after a preset time. Some elevators are configured to remain open at the floor until they are required to move again. Regulations often require doors to close after use to prevent smoke from entering the elevator shaft in event of fire. A stop switch to halt the elevator while in motion, which is often used to hold an elevator open while freight is loaded. Keeping an elevator stopped for too long may set off an alarm. Unless local codes require otherwise, this will most likely be a key switch. 
Some elevators may have one or more of the following: An elevator telephone, which can be used (in addition to the alarm) by a trapped passenger to call for help. This may consist of a transceiver, or simply a button. This feature is often required by local regulations. Hold button: This button delays the door closing timer, useful for loading freight and hospital beds. Call cancellation: A destination floor may be deselected by double clicking. Access restriction by key switches, RFID reader, code keypad, hotel room card, etc. One or more additional sets of doors. This is primarily used to serve different floor plans: on each floor only one set of doors opens. For example, in an elevated crosswalk setup, the front doors may open on the street level, and the rear doors open on the crosswalk level. This is also common in garages, rail stations, and airports. Alternatively, both doors may open on a given floor. This is sometimes timed so that one side opens first for getting off, and then the other side opens for getting on, to improve boarding/exiting speed. This is particularly useful when passengers have luggage or carts, as at an airport, due to reduced maneuverability. In case of dual doors, there may be two sets of door open and door close buttons, with one pair controlling the front doors, from the perspective of the console, typically denoted <> and ><, with the other pair controlling the rear doors, typically denoted with a line in the middle, <|> and >|<, or double lines, |<>| and >||<. This second set is required in the US if both doors can be opened at the same landing, so that the doors can both be controlled in independent service. Security camera Plain walls or mirrored walls Glass windowpane providing a view of the building interior or onto the streets An audible signal button, labeled "S": in the US, for elevators installed between 1991 and 2012 (initial passage of ADA and coming into force of 2010 revision), a button which if pushed, sounds an audible signal as each floor is passed, to assist visually impaired passengers. This button is no longer used on new elevators, where the sound is normally obligatory. Other controls, which are not available for the public (either because they are key switches, or because they are kept behind a locked panel), include: Fireman's service, phase II key switch Switch to enable or disable the elevator. An inspector's switch, which places the elevator in inspection mode (this may be situated on the top of the elevator) Manual up/down controls for elevator technicians, to be used in inspection mode, for example. An independent service/exclusive mode (also known as "car preference"), which will prevent the car from answering to hall calls and only arrive at floors selected via the panel. The door should stay open while parked on a floor. This mode may be used for temporarily transporting goods. Attendant service mode Large buildings with multiple elevators of this type also had an elevator dispatcher stationed in the lobby to direct passengers and to signal the operator to leave with the use of a mechanical "cricket" noisemaker. External controls Elevators are typically controlled from the outside by a call box, which has up and down buttons, at each stop. When pressed at a certain floor, the button (also known as a "hall call" button) calls the elevator to pick up more passengers. 
If the particular elevator is currently serving traffic in a certain direction, it will only answer calls in the same direction unless there are no more calls beyond that floor. In a group of two or more elevators, the call buttons may be linked to a central dispatch computer, such that they illuminate and cancel together. This is done to ensure that only one car is called at a time. Key switches may be installed on the ground floor so that the elevator can be remotely switched on or off from the outside. In destination control systems, one selects the intended destination floor (in lieu of pressing "up" or "down") and is then notified which elevator will serve their request. Floor numbering To distinguish between floors, the different landings are given numbers and sometimes letters. Elevator algorithm The elevator algorithm, a simple algorithm by which a single elevator can decide where to stop, is summarized as follows (a minimal code sketch appears at the end of this passage): Continue traveling in the same direction while there are remaining requests in that same direction. If there are no further requests in that direction, then stop and become idle, or change direction if there are requests in the opposite direction. The elevator algorithm has found an application in computer operating systems as an algorithm for scheduling hard disk requests. Modern elevators use more complex heuristic algorithms to decide which request to service next. In taller buildings with high traffic, such as the New York Marriott Marquis or the Burj Khalifa, the destination dispatch algorithm is used to group passengers going to similar floors, maximizing load by up to 25%. Destination control system Some skyscraper buildings and other types of installation feature a destination operating panel where a passenger registers their floor calls before entering the car. The system lets them know which car to wait for, instead of everyone boarding the next car. In this way, travel time is reduced as the elevator makes fewer stops for individual passengers, and the computer distributes adjacent stops to different cars in the bank. Although travel time is reduced, passenger waiting times may be longer as they will not necessarily be allocated the next car to depart. During the down-peak period the benefit of destination control will be limited, as passengers have a common destination. It can also improve accessibility, as a mobility-impaired passenger can move to their designated car in advance. Inside the elevator there is no call button to push, or the buttons are there but cannot be pushed (except the door open and alarm buttons); they only indicate stopping floors. The idea of destination control was originally conceived by Leo Port from Sydney in 1961, but at that time elevator controllers were implemented in relays and were unable to optimize the performance of destination control allocations. The system was first pioneered by Schindler Elevator in 1992 as the Miconic 10. Manufacturers of such systems claim that average traveling time can be reduced by up to 30%. However, performance enhancements cannot be generalized, as the benefits and limitations of the system depend on many factors. One problem is that the system is subject to gaming. Sometimes, one person enters the destination for a large group of people going to the same floor. The dispatching algorithm is usually unable to completely cater for the variation, and latecomers may find the elevator they are assigned to is already full.
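The elevator algorithm summarized earlier is compact enough to state directly in code. The sketch below is purely illustrative (the function and its parameters are invented for this example, not any manufacturer's dispatcher), and it ignores door timing, capacity, and group dispatch:

```python
def next_stop(current_floor, direction, requests):
    """Minimal sketch of the classic single-car elevator algorithm.

    current_floor : int
    direction     : +1 (up), -1 (down), or 0 (idle)
    requests      : set of requested floors (car calls and hall calls)
    Returns (next floor to serve, new direction), or (None, 0) if idle.
    """
    if not requests:
        return None, 0                    # nothing to do: become idle
    ahead = [f for f in requests
             if (f - current_floor) * direction > 0]
    if direction and ahead:
        # Keep traveling the same way; serve the nearest request ahead.
        return min(ahead, key=lambda f: abs(f - current_floor)), direction
    # No requests ahead: reverse (or start) toward the nearest request.
    target = min(requests, key=lambda f: abs(f - current_floor))
    new_dir = (target > current_floor) - (target < current_floor)
    return target, new_dir

# Example: car idle at floor 5 with calls at floors 2, 7, and 9.
print(next_stop(5, 0, {2, 7, 9}))  # -> (7, 1): nearest call, heading up
```

Modern group dispatchers and destination control systems replace this greedy sweep with the heuristic and destination-grouping strategies described above.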
Also, occasionally, one person may press the floor multiple times. This is common with up/down buttons when people believe this to be an effective way to hurry elevators. However, this will make the computer think multiple people are waiting and will allocate empty cars to serve this one person. To prevent this problem, in one implementation of destination control, every user is given an RFID card for identification and tracking, so that the system knows every user call and can cancel the first call if the passenger decides to travel to another destination, preventing empty calls. The newest systems can even determine where people are located, and how many are on each floor, by means of this identification, whether for the purpose of evacuating the building or for security reasons. Another way to prevent this issue is to treat everyone traveling from one floor to another as one group and to allocate only one car for that group. The same destination scheduling concept can also be applied to public transit, such as in group rapid transit. Special operating modes Anti-crime protection The anti-crime protection (ACP) feature will force each car to stop at a pre-defined landing and open its doors. This allows a security guard or a receptionist at the landing to visually inspect the passengers. The car stops at this landing as it passes to serve further demand. Up-peak During up-peak mode (also called moderate incoming traffic), elevator cars in a group are recalled to the lobby to provide expeditious service to passengers arriving at the building, most typically in the morning as people arrive for work or at the conclusion of a lunch-time period when people are going back to work. Elevators are dispatched one by one when they reach a pre-determined passenger load, or when they have had their doors opened for a certain period of time. The next elevator to be dispatched usually has its hall lantern or a "this car leaving next" sign illuminated to encourage passengers to make maximum use of the available elevator system capacity. Some elevator banks are programmed so that at least one car will always return to the lobby floor and park whenever it becomes free. The commencement of up-peak may be triggered by a time clock, by the departure of a certain number of fully loaded cars leaving the lobby within a given time period, or by a switch manually operated by a building attendant. Down-peak During down-peak mode, elevator cars in a group are sent away from the lobby towards the highest floor served, after which they commence running down the floors in response to hall calls placed by passengers wishing to leave the building. This allows the elevator system to provide maximum passenger handling capacity for people leaving the building. The commencement of down-peak may be triggered by a time clock, by the arrival of a certain number of fully loaded cars at the lobby within a given time period, or by a switch manually operated by a building attendant. Sabbath service In areas with large populations of observant Jews or in facilities catering to Jews, one may find a "Sabbath elevator". In this mode, an elevator will stop automatically at every floor, allowing people to step on and off without having to push any buttons. This prevents violation of the Sabbath prohibition against operating electrical devices when Sabbath is in effect for those who observe this ritual.
However, Sabbath mode has the side effect of using considerable amounts of energy, running the elevator car sequentially up and down every floor of a building, repeatedly servicing floors where it is not needed. For a tall building with many floors, the car must move frequently enough to avoid undue delay for potential users, who will not touch the controls as the car opens its doors on every floor up the building. Independent service Independent service or car preference is a special mode found on most elevators. It is activated by a key switch either inside the elevator itself or on a centralized control panel in the lobby. When an elevator is placed in this mode, it will no longer respond to hall calls. (In a bank of elevators, traffic is rerouted to the other elevators, while in a single elevator, the hall buttons are disabled.) The elevator will remain parked on a floor with its doors open until a floor is selected and the door close button is held until the elevator starts to travel. Independent service is useful when transporting large goods or moving groups of people between certain floors. Inspection service Inspection service is designed to provide access to the hoistway and car top for inspection and maintenance purposes by qualified elevator mechanics. It is first activated by a key switch on the car operating panel, usually labeled 'Inspection', 'Car Top', 'Access Enable' or 'HWENAB' (short for HoistWay access ENABled). When this switch is activated, the elevator will come to a stop if moving, car calls will be canceled (and the buttons disabled), and hall calls will be assigned to other elevator cars in the group (or canceled in a single-elevator configuration). The elevator can now only be moved by the corresponding 'Access' key switches, usually located at the highest landing (to access the top of the car) and the lowest landing (to access the elevator pit). The access key switches allow the car to move at reduced inspection speed with the hoistway door open. This speed can be anywhere up to 60% of normal operating speed on most controllers, and is usually defined by local safety codes. Elevators have a car top inspection station that allows the car to be operated by a mechanic in order to move it through the hoistway. Generally, there are three buttons: UP, RUN, and DOWN. Both the RUN and a direction button must be held to move the car in that direction, and the elevator will stop moving as soon as the buttons are released. Most other elevators have an up/down toggle switch and a RUN button. The inspection panel also has standard power outlets for work lamps and powered tools. Fire service Depending on the location of the elevator, fire service code will vary from state to state and country to country. Fire service is usually split into two modes: phase one and phase two. These are separate modes that the elevator can go into. Phase one mode is activated by a corresponding smoke sensor, heat sensor, or manual key switch in the building. Once an alarm has been activated, the elevator will automatically go into phase one. The elevator will wait an amount of time, then go into nudging mode to tell everyone the elevator is leaving the floor. Once the elevator has left the floor, depending on where the alarm was set off, the elevator will go to the fire-recall floor. However, if the alarm was activated on the fire-recall floor, the elevator will have an alternate floor to recall to.
When the elevator is recalled, it proceeds to the recall floor and stops with its doors open. The elevator no longer responds to calls or moves in any direction. Located on the fire-recall floor is a fire-service key switch, which can turn fire service off, turn fire service on, or bypass fire service. The only way to return the elevator to normal service is to switch it to bypass after the alarms have reset.

Phase-two mode can only be activated by a key switch located inside the elevator on the car operating panel. This mode was created for firefighters so that they may rescue people from a burning building. The phase-two key switch has three positions: off, on, and hold. By turning phase two on, the firefighter enables the car to move. However, as in independent-service mode, the car will not respond to a car call unless the firefighter manually pushes and holds the door close button. Once the elevator gets to the desired floor, it will not open its doors unless the firefighter holds the door open button; this is so that, if the floor is burning, the firefighter can feel the heat through the doors and know not to open them. The firefighter must hold the door open button until the door is completely opened. If for any reason the firefighter wishes to leave the elevator, they use the hold position on the key switch to make sure the elevator remains at that floor. To return to the recall floor, the firefighter simply turns the key off and closes the doors. In the UK and Europe, the requirements for firefighters' lifts are defined in the standard EN 81-72.

Medical emergency or code-blue service

Commonly found in hospitals, code-blue service allows an elevator to be summoned to any floor for use in an emergency situation. Each floor has a code-blue recall key switch; when it is activated, the elevator system immediately selects the elevator car that can respond the fastest, regardless of direction of travel and passenger load. Passengers inside the elevator are notified with an alarm and indicator light to exit the elevator when the doors open. Once the elevator arrives at the floor, it parks with its doors open, and the car buttons are disabled to prevent a passenger from taking control of the elevator. Medical personnel must then activate the code-blue key switch inside the car, select their floor, and close the doors with the door close button. The elevator then travels non-stop to the selected floor and remains in code-blue service until switched off in the car. Some hospital elevators feature a 'hold' position on the code-blue key switch (similar to fire service), which allows the elevator to remain at a floor locked out of service until code blue is deactivated.
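The dispatcher's choice of the fastest-responding car can be modelled, in a deliberately simplified way, as a nearest-car search. This is a toy sketch under assumed names, not a vendor algorithm; a real system would also weigh car speed, committed stops, and door times:

def select_code_blue_car(cars, call_floor):
    """Pick the car that can reach the call floor fastest.

    cars: dict of car id -> current floor. In this toy model, response
    time is just distance in floors, ignoring direction of travel and
    passenger load as the text above describes.
    """
    return min(cars, key=lambda car: abs(cars[car] - call_floor))

# A call on floor 6 summons car "B", which is only two floors away.
print(select_code_blue_car({"A": 1, "B": 8, "C": 12}, 6))  # -> "B"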
Riot mode

In the event of civil disturbance, insurrection, or rioting, management can prevent elevators from stopping at the lobby or parking areas, preventing undesired persons from using the elevators while still allowing building tenants to use them within the rest of the building.

Emergency power operation

Many elevator installations now feature emergency power systems, such as an uninterruptible power supply (UPS), which allow elevator use in blackout situations and prevent people from becoming trapped in elevators. To comply with the BS 9999 safety standard, a passenger lift being used in an emergency situation must have a secondary source of power. Where a generator is used as the secondary power supply in a hospital, a UPS must also be present to meet regulations stating that healthcare facilities must test their emergency generators under load at least once per month: during the test period only one supply of power is feeding the lift, so in a blackout situation without a UPS the lifts would not be operational.

Traction elevators

When power is lost in a traction elevator system, all elevators initially come to a halt. One by one, each car in the group returns to the lobby floor, opens its doors, and shuts down. People in the remaining elevators may see an indicator light or hear a voice announcement informing them that the elevator will return to the lobby shortly. Once all cars have successfully returned, the system automatically selects one or more cars to be used for normal operations, and these cars return to service. The car(s) selected to run under emergency power can be manually overridden by a key or strip switch in the lobby. To help prevent entrapment, when the system detects that it is running low on power, it brings the running cars to the lobby or the nearest floor, opens the doors, and shuts down.
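The recall-then-restore sequence described above can be sketched as follows. This is a simplified illustration under assumed names and behavior, not the logic of any actual emergency-power controller:

def emergency_power_sequence(cars, lobby=1, cars_allowed_on_generator=1):
    """Sketch of the recall-then-restore sequence described above.

    cars: dict of car id -> current floor. All cars halt, are returned
    to the lobby one at a time, and then a limited number are chosen
    to run on the reduced emergency supply. The selection rule here
    (alphabetical order) is an arbitrary placeholder; real systems
    allow a manual override from the lobby.
    """
    events = []
    for car in sorted(cars):            # return cars to the lobby one by one
        events.append(f"{car}: return to lobby (floor {lobby}), open doors, shut down")
    survivors = sorted(cars)[:cars_allowed_on_generator]
    for car in survivors:               # restore a limited number to service
        events.append(f"{car}: selected for emergency-power service")
    return events

for line in emergency_power_sequence({"car_A": 7, "car_B": 3}):
    print(line)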
Hydraulic elevators

In hydraulic elevator systems, emergency power lowers the elevators to the lowest landing and opens the doors to allow passengers to exit. The doors then close after an adjustable time period, and the car remains unusable until reset, usually by cycling the elevator main power switch. Typically, due to the high current draw when starting the pump motor, hydraulic elevators are not run from standard emergency power systems; buildings like hospitals and nursing homes usually size their emergency generators to accommodate this draw. However, the increasing use of current-limiting motor starters, commonly known as "soft-start" contactors, avoids much of this problem, and the current draw of the pump motor is less of a limiting concern.

Modernization

Most elevators are built to provide about 30 to 40 years of service, as long as the manufacturer's specified service intervals and periodic maintenance/inspections are followed. As the elevator ages and its equipment becomes increasingly difficult to find or replace, and in light of code changes and deteriorating ride performance, a complete overhaul of the elevator may be suggested to the building owners. A typical modernization covers controller equipment, electrical wiring and buttons, position indicators and direction arrows, hoist machines and motors (including door operators), and sometimes door hanger tracks. Rarely are car slings, rails, or other heavy structures changed. The cost of an elevator modernization can vary greatly depending on which type of equipment is to be installed. Modernization can greatly improve operational reliability by replacing electrical relays and contacts with solid-state electronics. Ride quality can be improved by replacing motor-generator-based drives with variable-voltage, variable-frequency (V3F) drives, providing near-seamless acceleration and deceleration. Passenger safety is also improved by updating systems and equipment to conform to current codes.

Safety

On 26 February 2014, the European Union released its adoption of safety standards through a directive notification.

Traction elevators

Statistically speaking, traction elevators are extremely safe. Of the 20 to 30 elevator-related deaths each year, most are maintenance-related—for example, technicians leaning too far into the shaft or getting caught between moving parts, and most of the rest are attributed to other kinds of accidents, such as people stepping blindly through doors that open onto empty shafts or being strangled by scarves caught in the doors. While it is possible (though extraordinarily unlikely) for an elevator's cable to snap, all elevators in the modern era have been fitted with several safety devices that prevent the elevator from simply free-falling and crashing. An elevator cab is typically borne by 2 to 6 (up to 12 or more in high-rise installations) redundant hoist cables or belts, each of which is capable on its own of supporting the rated load of the elevator plus twenty-five percent more weight. In addition, there is a device which detects whether the elevator is descending faster than its maximum designed speed; if this happens, the device causes copper (or silicon nitride ceramic in high-rise installations) brake shoes to clamp down along the vertical rails in the shaft, stopping the elevator quickly, but not so abruptly as to cause injury. This device, called the governor, was invented by Elisha Graves Otis. For example, in 2007 an elevator at a Seattle children's hospital experienced cable failure, causing it to free-fall until its governor engaged. In addition, an oil/hydraulic, spring, polyurethane, or telescopic oil/hydraulic buffer, or a combination (depending on the travel height and speed), is installed at the bottom of the shaft (or in the bottom of the cab, and sometimes also in the top of the cab or shaft) to somewhat cushion any impact. Lethal accidents do occur, however: in 1989, seven people were killed at a hospital in L'Hospitalet, Spain, when the pulleys connecting the cables to the elevator cabin broke loose and the safety mechanism failed to activate, causing the elevator to plunge seven stories to the ground. A similar accident occurred in 2019 in Santos, Brazil, killing four.

Hydraulic elevators

Past problems with hydraulic elevators include underground electrolytic destruction of the cylinder and bulkhead, pipe failures, and control failures. Single-bulkhead cylinders, typically built prior to a 1972 ASME A17.1 Elevator Safety Code change requiring a second dished bulkhead, were subject to possible catastrophic failure; the code previously permitted single-bottom hydraulic cylinders. In the event of a cylinder breach, the fluid loss results in uncontrolled downward movement of the elevator. This creates two significant hazards: being subject to an impact at the bottom when the elevator stops suddenly, and being exposed to a potential shear at the entrance if the rider is partly in the elevator. Because it is impossible to verify the system at all times, the code requires periodic testing of the pressure capability. Another solution to protect against a cylinder blowout is to install a plunger-gripping device. Two commercially available devices are known by the marketing names "LifeJacket" and "HydroBrake". The plunger gripper is a device which, in the event of an uncontrolled downward acceleration, nondestructively grips the plunger and stops the car. A device known as an overspeed or rupture valve is attached to the hydraulic inlet/outlet of the cylinder and is adjusted for a maximum flow rate.
If a pipe or hose breaks (ruptures), the flow rate through the rupture valve surpasses a set limit and mechanically stops the outlet flow of hydraulic fluid, thus stopping the plunger and the car in the down direction.

In addition to the safety concerns for older hydraulic elevators, there is a risk of hydraulic oil leaking into the aquifer and causing potential environmental contamination. This has led to the introduction of PVC liners (casings) around hydraulic cylinders, which can be monitored for integrity. Recent innovations in inverted hydraulic jacks have eliminated the costly process of drilling the ground to install a borehole jack. This also eliminates the threat of corrosion to the system and increases safety.

Mine-shaft elevators

Safety testing of mine-shaft elevator cables is routinely undertaken. The method involves destructive testing of a segment of the cable. The ends of the segment are frayed, then set in conical zinc molds. Each end of the segment is then secured in a large hydraulic stretching machine, and the segment is placed under increasing load to the point of failure. Data about elasticity, load, and other factors is compiled and a report is produced. The report is then analyzed to determine whether or not the entire cable is safe to use.

Uses

Passenger service

A passenger elevator is designed to move people between a building's floors. Passenger elevator capacity is related to the available floor space. Generally, passenger elevators in buildings of eight floors or fewer are hydraulic or electric, with electric elevators typically capable of higher speeds than hydraulic ones. Sometimes passenger elevators are used as city transport along with funiculars. For example, there is a three-station underground public elevator in Yalta, Ukraine, which takes passengers from the top of a hill above the Black Sea, on which hotels are perched, to a tunnel located on the beach below. At Casco Viejo station in the Bilbao Metro, the elevator that provides access to the station from a hilltop neighborhood doubles as city transportation: the station's ticket barriers are set up in such a way that passengers can pay to reach the elevator from the entrance in the lower city, or vice versa.
ALS
Amyotrophic lateral sclerosis (ALS), also known as motor neurone disease (MND) or, in the United States, Lou Gehrig's disease, is a rare, terminal neurodegenerative disorder that results in the progressive loss of both upper and lower motor neurons, which normally control voluntary muscle contraction. ALS is the most common form of the motor neuron diseases. ALS often presents in its early stages with gradual muscle stiffness, twitches, weakness, and wasting. Motor neuron loss typically continues until the abilities to eat, speak, move, and, lastly, breathe are all lost. While only 15% of people with ALS fully develop frontotemporal dementia, an estimated 50% face at least some minor difficulties with thinking and behavior. Depending on which of these symptoms develops first, ALS is classified as limb-onset (beginning with weakness in the arms or legs) or bulbar-onset (beginning with difficulty speaking or swallowing).

Most cases of ALS (about 90–95%) have no known cause and are known as sporadic ALS, though both genetic and environmental factors are believed to be involved. The remaining 5–10% of cases have a genetic cause, often linked to a family history of the disease, and are known as familial (hereditary) ALS. About half of these genetic cases are due to disease-causing variants in one of four specific genes. The diagnosis is based on a person's signs and symptoms, with testing conducted to rule out other potential causes.

There is no known cure for ALS. The goal of treatment is to slow the disease's progression and improve symptoms. FDA-approved treatments that slow the progression of ALS include riluzole and edaravone. Non-invasive ventilation may result in both improved quality and length of life. Mechanical ventilation can prolong survival but does not stop disease progression. A feeding tube may help maintain weight and nutrition. Death is usually caused by respiratory failure. The disease can affect people of any age but usually starts around the age of 60. The average survival from onset to death is two to four years, though this can vary, and about 10% of those affected survive longer than ten years.

Descriptions of the disease date back to at least 1824, when it was described by Charles Bell. In 1869, the connection between the symptoms and the underlying neurological problems was first described by the French neurologist Jean-Martin Charcot, who in 1874 began using the term amyotrophic lateral sclerosis.

Classification

ALS is a motor neuron disease, a group of neurological disorders that selectively affect motor neurons, the cells that control the voluntary muscles of the body. Other motor neuron diseases include primary lateral sclerosis (PLS), progressive muscular atrophy (PMA), progressive bulbar palsy, pseudobulbar palsy, and monomelic amyotrophy (MMA).

As a disease, ALS itself can be classified in a few different ways: by which part of the motor neurons are affected; by the parts of the body first affected; by whether it is genetic; and by the age at which it started. Each individual diagnosed with the condition sits at a unique place at the intersection of these complex and overlapping subtypes, which presents a challenge to diagnosis, understanding, and prognosis.

Subtypes of motor neuron disease

ALS can be classified by the types of motor neurons that are affected. To successfully control any voluntary muscle in the body, a signal must be sent from the motor cortex in the brain down the upper motor neuron as it travels down the spinal cord.
There, it connects via a synapse to the lower motor neuron, which connects to the muscle itself. Damage to either the upper or lower motor neuron, anywhere on its way from brain to muscle, causes different types of symptoms: damage to the upper motor neuron typically causes spasticity, including stiffness and increased tendon reflexes or clonus, while damage to the lower motor neuron typically causes weakness, muscle atrophy, and fasciculations.

Classical, or classic, ALS involves degeneration of both the upper motor neurons in the brain and the lower motor neurons in the spinal cord. Primary lateral sclerosis (PLS) involves degeneration of only the upper motor neurons, and progressive muscular atrophy (PMA) involves only the lower motor neurons. There is debate over whether PLS and PMA are separate diseases or simply variants of ALS.

Classical ALS accounts for about 70% of all cases of ALS and can be subdivided by where symptoms first appear, since they are usually focused on one region of the body at initial presentation before later spreading: limb-onset (also known as spinal-onset) and bulbar-onset ALS. Limb-onset ALS begins with weakness in the hands, arms, feet, and/or legs and accounts for about two-thirds of all classical ALS cases. Bulbar-onset ALS begins with weakness in the muscles of speech, chewing, and swallowing and accounts for about 25% of classical ALS cases. A rarer type of classical ALS, affecting around 3% of patients, is respiratory-onset ALS, in which the initial symptoms are difficulty breathing (dyspnea) upon exertion, at rest, or while lying flat (orthopnea).

Primary lateral sclerosis (PLS) is a subtype of the overall ALS category which accounts for about 5% of all cases and affects only the upper motor neurons in the arms, legs, and bulbar region. However, more than 75% of people with apparent PLS go on to develop lower motor neuron signs within four years of symptom onset, meaning that a definitive diagnosis of PLS cannot be made until several years have passed. PLS has a better prognosis than classical ALS, as it progresses more slowly, results in less functional decline, does not affect the ability to breathe, and causes less severe weight loss.

Progressive muscular atrophy (PMA) is another subtype that accounts for about 5% of the overall ALS category and affects the lower motor neurons in the arms, legs, and bulbar region. While PMA is associated with longer survival on average than classical ALS, it is still progressive over time, eventually leading to respiratory failure and death. As with PLS, PMA can develop into classical ALS over time if the lower motor neuron involvement progresses to include upper motor neurons, in which case the diagnosis may be changed to classic ALS.

Rare isolated variants of ALS

Isolated variants of ALS have symptoms that are limited to a single region for at least a year; they progress more slowly than classical ALS and are associated with longer survival. These regional variants can only be considered as a diagnosis if the initial symptoms fail to spread to other spinal cord regions for an extended period of time (at least 12 months). Flail arm syndrome is characterized by lower motor neuron damage affecting the arm muscles, typically starting in the upper arms symmetrically and progressing downwards to the hands. Flail leg syndrome is characterized by lower motor neuron damage leading to asymmetrical weakness and wasting in the legs, starting around the feet.
Isolated bulbar palsy is characterized by upper or lower motor neuron damage in the bulbar region (in the absence of limb symptoms for at least 20 months), leading to the gradual onset of difficulty with speech (dysarthria) and swallowing (dysphagia).

Age of onset

ALS can also be classified by age of onset. People with familial ALS have an age of onset about 5 years younger than those with apparently sporadic ALS. About 10% of all cases of ALS begin before age 45 ("young-onset" ALS), and about 1% of all cases begin before age 25 ("juvenile" ALS). People who develop young-onset ALS are more likely to be male, less likely to have bulbar onset of symptoms, and more likely to have a slower progression of the disease. Juvenile ALS is more likely to be genetic in origin than adult-onset ALS; the most common genes associated with juvenile ALS are FUS, ALS2, and SETX. Although most people with juvenile ALS live longer than those with adult-onset ALS, some have specific mutations in FUS and SOD1 that are associated with a poor prognosis. Late onset (after age 65) is generally associated with more rapid functional decline and shorter survival.

Signs and symptoms

The disorder causes muscle weakness, atrophy, and muscle spasms throughout the body due to the degeneration of the upper and lower motor neurons. Sensory nerves and the autonomic nervous system are generally unaffected, meaning the majority of people with ALS maintain hearing, sight, touch, smell, and taste.

Initial symptoms

The start of ALS may be so subtle that the symptoms are overlooked. The earliest symptoms are muscle weakness or muscle atrophy, typically on one side of the body. Other presenting symptoms include trouble swallowing or breathing, cramping or stiffness of affected muscles, muscle weakness affecting an arm or a leg, or slurred and nasal speech. The parts of the body affected by early symptoms depend on which motor neurons are damaged first.

In limb-onset ALS, the first symptoms are in the arms or the legs. If the legs are affected first, people may experience awkwardness, tripping, or stumbling when walking or running; this is often marked by walking with a "dropped foot" that drags gently on the ground. If the arms are affected first, people may experience difficulty with tasks requiring manual dexterity, such as buttoning a shirt, writing, or turning a key in a lock. In bulbar-onset ALS, the first symptoms are difficulty speaking or swallowing: speech may become slurred, nasal in character, or quieter, and there may be difficulty swallowing and loss of tongue mobility. A smaller proportion of people experience respiratory-onset ALS, in which the intercostal muscles that support breathing are affected first.

Over time, people experience increasing difficulty moving, swallowing (dysphagia), and speaking or forming words (dysarthria). Symptoms of upper motor neuron involvement include tight and stiff muscles (spasticity) and exaggerated reflexes (hyperreflexia), including an overactive gag reflex. While the disease does not cause pain directly, pain is a symptom experienced by most people with ALS as a result of reduced mobility. Symptoms of lower motor neuron degeneration include muscle weakness and atrophy, muscle cramps, and fleeting twitches of muscles that can be seen under the skin (fasciculations).
Progression

Although the initial site of symptoms and the subsequent rate of disability progression vary from person to person, the initially affected body region is usually the most affected over time, and symptoms usually spread to a neighbouring region. For example, symptoms starting in one arm usually spread next to either the opposite arm or to the leg on the same side. Bulbar-onset patients most typically develop their next symptoms in the arms rather than the legs, arm-onset patients typically spread to the legs before the bulbar region, and leg-onset patients typically spread to the arms rather than the bulbar region. Over time, regardless of where symptoms began, most people eventually lose the ability to walk or to use their hands and arms independently. Less consistently, they may lose the ability to speak and to swallow food. It is the eventual development of weakness of the respiratory muscles, with the loss of the ability to cough and to breathe without support, that is ultimately life-shortening in ALS.

The rate of progression can be measured using the ALS Functional Rating Scale – Revised (ALSFRS-R), a 12-item survey instrument administered as a clinical interview or self-reported questionnaire that produces a score between 48 (normal function) and 0 (severe disability). The ALSFRS-R is the most frequently used outcome measure in clinical trials and is used by doctors to track disease progression. Though the degree of variability is high and a small percentage of people have a much slower progression, on average people with ALS lose about 1 ALSFRS-R point per month. Brief periods of stabilization ("plateaus") and even small reversals in ALSFRS-R score are not uncommon, because the instrument is subjective and can be affected by medication and by different forms of compensation for changes in function. However, it is rare (<1%) for these improvements to be large (greater than 4 ALSFRS-R points) or sustained (longer than 12 months). A survey-based study among clinicians showed that they rated a 20% change in the slope of the ALSFRS-R as clinically meaningful, which is the most common threshold used to determine whether a new treatment is working in clinical trials.
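As a worked illustration of how these figures combine (using only the numbers given in this section; the patient values are invented for the example):

def alsfrs_slope(baseline_score, current_score, months_elapsed):
    """Monthly rate of ALSFRS-R decline, in points per month (shown as a positive number)."""
    return (baseline_score - current_score) / months_elapsed

# A hypothetical patient going from the maximum score of 48 to 36 over a year
# matches the average decline of about 1 point per month.
slope = alsfrs_slope(48, 36, 12)
print(slope)                    # 1.0

# A treatment judged clinically meaningful (a 20% slower slope)
# would reduce this to about 0.8 points per month.
print(slope * (1 - 0.20))       # 0.8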
Late-stage disease management

Difficulties with chewing and swallowing (dysphagia) make eating very difficult and increase the risk of choking or of aspirating food into the lungs. In later stages of the disorder, aspiration pneumonia can develop, and maintaining a healthy weight can become a significant problem that may require the insertion of a feeding tube. As the diaphragm and the intercostal muscles of the rib cage that support breathing weaken, measures of lung function such as vital capacity and inspiratory pressure diminish. In respiratory-onset ALS, this may occur before significant limb weakness is apparent. Individuals affected by the disorder may ultimately lose the ability to initiate and control all voluntary movement, known as locked-in syndrome. Bladder and bowel function are usually spared, meaning urinary and fecal incontinence are uncommon, although difficulty getting to a toilet can create problems. The extraocular muscles responsible for eye movement are also usually spared, meaning the use of eye-tracking technology to support augmentative communication is often feasible, albeit slow, and needs may change over time. Despite these challenges, many people in an advanced state of the disease report satisfactory wellbeing and quality of life.

Prognosis, staging, and survival

Although respiratory support using non-invasive ventilation can ease problems with breathing and prolong survival, it does not affect the rate of progression of ALS. Most people with ALS die between two and four years after diagnosis. Around 50% of people with ALS die within 30 months of their symptoms beginning, about 20% live between five and ten years, and about 10% survive for 10 years or longer. The most common cause of death is respiratory failure, often accelerated by pneumonia. Most ALS patients die at home after a period of worsening difficulty breathing, a decline in nutritional status, or a rapid worsening of symptoms. Sudden death or acute respiratory distress is uncommon. Access to palliative care is recommended from an early stage to explore options, ensure psychosocial support for the patient and caregivers, and discuss advance healthcare directives.

As with cancer staging, ALS has staging systems numbered from 1 to 4 that are used for research purposes in clinical trials. Two very similar staging systems emerged around the same time: the King's staging system and the Milano-Torino (MiToS) functional staging system. Providing individual patients with a precise prognosis is not currently possible, though research is underway to provide statistical models based on prognostic factors including age at onset, progression rate, site of onset, and the presence of frontotemporal dementia. Those with bulbar-onset ALS have a worse prognosis than those with limb onset: a population-based study found that bulbar-onset patients had a median survival of 2.0 years and a 10-year survival rate of 3%, while limb-onset patients had a median survival of 2.6 years and a 10-year survival rate of 13%. Those with respiratory-onset ALS had a shorter median survival of 1.4 years and 0% survival at 10 years. While the astrophysicist Stephen Hawking lived for 55 more years following his diagnosis, his was an unusual case.

Cognitive, emotional, and behavioral symptoms

Cognitive impairment or behavioral dysfunction is present in 30–50% of individuals with ALS and can appear more frequently in later stages of the disease. Language dysfunction, executive dysfunction, and trouble with social cognition and verbal memory are the most commonly reported cognitive symptoms. Cognitive impairment is found more frequently in patients with C9orf72 repeat expansions, bulbar onset, bulbar symptoms, a family history of ALS, and/or a predominantly upper motor neuron phenotype.

Emotional lability is a symptom in which patients cry, smile, yawn, or laugh either in the absence of emotional stimuli or when feeling the opposite emotion to that being expressed; it is experienced by about half of ALS patients and is more common in those with bulbar-onset ALS. While relatively benign compared with other symptoms, it can cause increased stigma and social isolation, as people around the patient struggle to react appropriately to what can be frequent and inappropriate outbursts in public.

In addition to mild changes in cognition that may emerge only during neuropsychological testing, around 10–15% of individuals show signs of frontotemporal dementia (FTD). Repeating phrases or gestures, apathy, and loss of inhibition are the most frequently reported behavioral features. ALS and FTD are now considered part of a common disease spectrum (ALS–FTD) because of their genetic, clinical, and pathological similarities.
Genetically, repeat expansions in the C9orf72 gene account for about 40% of genetic ALS and 25% of genetic FTD. Cognitive and behavioral issues are associated with a poorer prognosis, as they may reduce adherence to medical advice, and deficits in empathy and social cognition may increase caregiver burden.

Cause

It is not known what causes sporadic ALS, so it is described as an idiopathic disease. Though its exact cause is unknown, genetic and environmental factors are thought to be of roughly equal importance. The genetic factors are better understood than the environmental ones; no specific environmental factor has been definitively shown to cause ALS. A multi-step liability threshold model for ALS proposes that cellular damage accumulates over time due to genetic factors present at birth and exposure to environmental risks throughout life. ALS can strike at any age, but its likelihood increases with age. Most people who develop ALS are between the ages of 40 and 70, with an average age of 55 at the time of diagnosis. ALS is 20% more common in men than in women, but this difference in sex distribution is no longer present in patients with onset after age 70.

Genetics and genetic testing

While they appear identical clinically and pathologically, ALS can be classified as either familial or sporadic, depending on whether there is a known family history of the disease and/or whether an ALS-associated genetic mutation has been identified through genetic testing. Familial ALS is thought to account for 10–15% of cases overall and can include monogenic, oligogenic, and polygenic modes of inheritance. There is considerable variation among clinicians in how to approach genetic testing in ALS, and only about half discuss the possibility of genetic inheritance with their patients, particularly if there is no discernible family history of the disease. In the past, genetic counseling and testing were offered only to those with obviously familial ALS, but it is increasingly recognized that apparently sporadic cases may also be due to disease-causing de novo mutations in genes such as SOD1 or C9orf72, to an incomplete family history, or to incomplete penetrance, meaning that a patient's ancestors carried the gene but did not express the disease in their lifetimes. The lack of a positive family history may be caused by a lack of historical records, a small family, older generations dying earlier of causes other than ALS, genetic non-paternity, and uncertainty over whether certain neuropsychiatric conditions (e.g. frontotemporal dementia, other forms of dementia, suicide, psychosis, schizophrenia) should be considered significant when determining a family history.

There have been calls in the research community to routinely counsel and test all diagnosed ALS patients for familial ALS, particularly as there is now a licensed gene therapy (tofersen) specifically targeted at carriers of SOD1 mutations. A shortage of genetic counselors and limited clinical capacity to see at-risk individuals make this challenging in practice, as does unequal access to genetic testing around the world.

More than 40 genes have been associated with ALS, of which four account for nearly half of familial cases and around 5% of sporadic cases: C9orf72 (40% of familial cases, 7% of sporadic), SOD1 (12% of familial cases, 1–2% of sporadic), FUS (4% of familial cases, 1% of sporadic), and TARDBP (4% of familial cases, 1% of sporadic), with the remaining genes mostly accounting for fewer than 1% of either familial or sporadic cases.
ALS genes identified to date explain the cause of about 70% of familial ALS and about 15% of sporadic ALS. Overall, first-degree relatives of an individual with ALS have a roughly 1% risk of developing ALS themselves.

Environmental and other factors

The multi-step hypothesis suggests the disease is caused by an interaction between an individual's genetic risk factors and their cumulative lifetime of exposure to environmental factors, termed their exposome. The most consistent lifetime exposures associated with developing ALS (other than genetic mutations) include heavy metals (e.g. lead and mercury), chemicals (e.g. pesticides and solvents), electric shock, physical injury (including head injury), and smoking (in men more than women). Overall these effects are small, with each exposure in isolation only increasing the likelihood of a very rare condition by a small amount. For instance, an individual's lifetime risk of developing ALS might go from "1 in 400" without exposure to between "1 in 300" and "1 in 200" with exposure to heavy metals, corresponding to a relative risk of roughly 1.3 to 2. Some industries depend heavily on the use of, or exposure to, these environmental factors, increasing employees' susceptibility; agricultural work, for example, can involve as many as five such risk factors, excluding workers' smoking habits.

A range of other factors have weaker evidence supporting them, including participation in professional sports, a lower body mass index, lower educational attainment, manual occupations, military service, exposure to β-N-methylamino-L-alanine (BMAA), and viral infections. Although some personality traits, such as openness, agreeableness, and conscientiousness, appear remarkably common among patients with ALS, it remains open whether personality can increase susceptibility to ALS directly. Instead, genetic factors giving rise to personality might simultaneously predispose people to develop ALS, or these personality traits might underlie lifestyle choices that are in turn risk factors for ALS.

Pathophysiology

Neuropathology

Upon examination at autopsy, features of the disease that can be seen with the naked eye include skeletal muscle atrophy, motor cortex atrophy, sclerosis of the corticospinal and corticobulbar tracts, thinning of the hypoglossal nerves (which control the tongue), and thinning of the anterior roots of the spinal cord. The defining feature of ALS is the death of both upper motor neurons (located in the motor cortex of the brain) and lower motor neurons (located in the brainstem and spinal cord). In ALS with frontotemporal dementia, neurons throughout the frontal and temporal lobes of the brain die as well. The pathological hallmark of ALS is the presence of inclusion bodies (abnormal aggregations of protein) known as Bunina bodies in the cytoplasm of motor neurons. In about 97% of people with ALS, the main component of the inclusion bodies is TDP-43 protein; however, in those with SOD1 or FUS mutations, the main component is SOD1 protein or FUS protein, respectively. Prion-like propagation of misfolded proteins from cell to cell may explain why ALS starts in one area and spreads to others. The glymphatic system may also be involved in the pathogenesis of ALS.

Biochemistry

It is still not fully understood why neurons die in ALS, but this neurodegeneration is thought to involve many different cellular and molecular processes.
The genes known to be involved in ALS can be grouped into three general categories based on their normal function: protein degradation, the cytoskeleton, and RNA processing. Mutant SOD1 protein forms intracellular aggregations that inhibit protein degradation. Cytoplasmic aggregations of wild-type (normal) SOD1 protein are common in sporadic ALS, and it is thought that misfolded mutant SOD1 can cause misfolding and aggregation of wild-type SOD1 in neighboring neurons in a prion-like manner. Other protein degradation genes that can cause ALS when mutated include VCP, OPTN, TBK1, and SQSTM1. Three genes implicated in ALS that are important for maintaining the cytoskeleton and for axonal transport are DCTN1, PFN1, and TUBA4A.

Several ALS genes encode RNA-binding proteins. The first to be discovered was TDP-43, a nuclear protein that aggregates in the cytoplasm of motor neurons in almost all cases of ALS; however, mutations in TARDBP, the gene that codes for TDP-43, are a rare cause of ALS. FUS codes for FUS, another RNA-binding protein with a similar function to TDP-43, which can cause ALS when mutated. It is thought that mutations in TARDBP and FUS increase the binding affinity of the low-complexity domain, causing their respective proteins to aggregate in the cytoplasm. Once these mutant RNA-binding proteins are misfolded and aggregated, they may be able to misfold normal proteins both within and between cells in a prion-like manner. This also leads to decreased levels of RNA-binding protein in the nucleus, which may mean that their target RNA transcripts do not undergo normal processing. Other RNA metabolism genes associated with ALS include ANG, SETX, and MATR3.

C9orf72 is the most commonly mutated gene in ALS and causes motor neuron death through a number of mechanisms. The pathogenic mutation is a hexanucleotide repeat expansion (a series of six nucleotides repeated over and over); people with up to 30 repeats are considered normal, while people with hundreds or thousands of repeats can have familial ALS, frontotemporal dementia, or sometimes sporadic ALS. The three mechanisms of disease associated with these C9orf72 repeats are deposition of RNA transcripts in the nucleus, translation of the RNA into toxic dipeptide repeat proteins in the cytoplasm, and decreased levels of the normal C9orf72 protein. Mitochondrial bioenergetic dysfunction leading to dysfunctional motor neuron axonal homeostasis (reduced axonal length and reduced fast axonal transport of mitochondrial cargo) has been shown to occur in C9orf72-ALS using human induced pluripotent stem cell (iPSC) technologies coupled with CRISPR/Cas9 gene editing, as well as examination of human post-mortem spinal cord tissue.

Excitotoxicity, or nerve cell death caused by high levels of intracellular calcium due to excessive stimulation by the excitatory neurotransmitter glutamate, is a mechanism thought to be common to all forms of ALS. Motor neurons are more sensitive to excitotoxicity than other types of neurons because they have a lower calcium-buffering capacity and a type of glutamate receptor (the AMPA receptor) that is more permeable to calcium. In ALS, there are decreased levels of excitatory amino acid transporter 2 (EAAT2), the main transporter that removes glutamate from the synapse; this leads to increased synaptic glutamate levels and excitotoxicity.
Riluzole, a drug that modestly prolongs survival in ALS, inhibits glutamate release from pre-synaptic neurons; however, it is unclear whether this mechanism is responsible for its therapeutic effect.

Diagnosis

No single test can provide a definite diagnosis of ALS. Instead, the diagnosis is made primarily on the basis of a physician's clinical assessment after other diseases have been ruled out. Physicians often obtain the person's full medical history and conduct neurologic examinations at regular intervals to assess whether signs and symptoms such as muscle weakness, muscle atrophy, hyperreflexia, Babinski's sign, and spasticity are worsening. Many biomarkers are being studied for the condition, but as of 2023 they are not in general medical use.

Differential diagnosis

Because symptoms of ALS can be similar to those of a wide variety of other, more treatable diseases or disorders, appropriate tests must be conducted to exclude other conditions. One of these tests is electromyography (EMG), a special recording technique that detects electrical activity in muscles. Certain EMG findings can support the diagnosis of ALS. Another common test measures nerve conduction velocity (NCV). Specific abnormalities in the NCV results may suggest, for example, that the person has a form of peripheral neuropathy (damage to peripheral nerves) or myopathy (muscle disease) rather than ALS. While magnetic resonance imaging (MRI) is often normal in people with early-stage ALS, it can reveal evidence of other problems that may be causing the symptoms, such as a spinal cord tumor, multiple sclerosis, a herniated disc in the neck, syringomyelia, or cervical spondylosis. Based on the person's symptoms and the findings from the examination and these tests, the physician may order tests on blood and urine samples to eliminate the possibility of other diseases, as well as routine laboratory tests. In some cases, for example if a physician suspects a myopathy rather than ALS, a muscle biopsy may be performed.

A number of infectious diseases can sometimes cause ALS-like symptoms, including human immunodeficiency virus (HIV), human T-lymphotropic virus (HTLV), Lyme disease, and syphilis. Neurological disorders such as multiple sclerosis, post-polio syndrome, multifocal motor neuropathy, CIDP, spinal muscular atrophy, and spinal and bulbar muscular atrophy can also mimic certain aspects of the disease and should be considered. ALS must be differentiated from the "ALS mimic syndromes", unrelated disorders that may have a similar presentation and clinical features to ALS or its variants. Because the prognosis of ALS and the closely related subtypes of motor neuron disease is generally poor, neurologists may carry out investigations to evaluate and exclude other diagnostic possibilities. Disorders of the neuromuscular junction, such as myasthenia gravis (MG) and Lambert–Eaton myasthenic syndrome, may also mimic ALS, although this rarely presents diagnostic difficulty over time. Benign fasciculation syndrome and cramp fasciculation syndrome may also, occasionally, mimic some of the early symptoms of ALS. Nonetheless, the absence of other neurological features that develop inexorably with ALS means that, over time, the distinction will not present any difficulty to the experienced neurologist; where doubt remains, EMG may be helpful.

Management

There is no cure for ALS. Management focuses on treating symptoms and providing supportive care to improve quality of life and prolong survival.
This care is best provided by multidisciplinary teams of healthcare professionals; attending a multidisciplinary ALS clinic is associated with longer survival, fewer hospitalizations, and improved quality of life. Non-invasive ventilation (NIV) is the main treatment for respiratory failure in ALS. In people with normal bulbar function, it prolongs survival by about seven months and improves quality of life. One study found that NIV is ineffective for people with poor bulbar function, while another suggested that it may provide a modest survival benefit; many people with ALS have difficulty tolerating NIV. Invasive ventilation is an option for people with advanced ALS when NIV is not enough to manage their symptoms. While invasive ventilation prolongs survival, disease progression and functional decline continue, and it may decrease the quality of life of people with ALS or their caregivers. Invasive ventilation is more commonly used in Japan than in North America or Europe.

Physical therapy can promote functional independence through aerobic, range-of-motion, and stretching exercises. Occupational therapy can assist with activities of daily living through adaptive equipment. Speech therapy can assist people with ALS who have difficulty speaking. Preventing weight loss and malnutrition in people with ALS improves both survival and quality of life. Initially, difficulty swallowing (dysphagia) can be managed by dietary changes and swallowing techniques. A feeding tube should be considered if someone with ALS loses 5% or more of their body weight or if they cannot safely swallow food and water. The feeding tube is usually inserted by percutaneous endoscopic gastrostomy (PEG). There is weak evidence that PEG tubes improve survival; PEG insertion is usually performed with the intent of improving quality of life.

Palliative care should begin shortly after someone is diagnosed with ALS. Discussion of end-of-life issues gives people with ALS time to reflect on their preferences for end-of-life care and can help avoid unwanted interventions or procedures. Hospice care can improve symptom management at the end of life and increase the likelihood of a peaceful death. In the final days of life, opioids can be used to treat pain and dyspnea, while benzodiazepines can be used to treat anxiety.

Medications

Disease-slowing treatments

Riluzole has been found to modestly prolong survival, by about 2–3 months. It may have a greater survival benefit for those with bulbar-onset ALS, and may work by decreasing release of the excitatory neurotransmitter glutamate from pre-synaptic neurons. The most common side effects are nausea and a lack of energy (asthenia). People with ALS should begin treatment with riluzole as soon as possible following their diagnosis. Riluzole is available as a tablet, a liquid, or a dissolvable oral film.

Edaravone has been shown to modestly slow the decline in function in a small group of people with early-stage ALS. It may work by protecting motor neurons from oxidative stress. The most common side effects are bruising and gait disturbance. Edaravone is available as an intravenous infusion or as an oral suspension.

AMX0035 (Relyvrio) is a combination of sodium phenylbutyrate and taurursodiol that was initially shown to prolong patient survival by an average of six months. Relyvrio was withdrawn by the manufacturer in April 2024 following the completion of the Phase 3 PHOENIX trial, which did not show substantial benefit to ALS patients.
Tofersen (Qalsody) is an antisense oligonucleotide approved for medical use in the United States in April 2023 for the treatment of SOD1-associated ALS. In a study of 108 patients with SOD1-associated ALS, there was a non-significant trend towards a slowing of progression, as well as a significant reduction in neurofilament light chain, a putative ALS biomarker thought to indicate neuronal damage. A follow-up study and open-label extension suggested that earlier treatment initiation had a beneficial effect on slowing disease progression. Tofersen is given as an intrathecal injection into the lumbar cistern at the base of the spine.

Symptomatic treatments

Other medications may be used to help reduce fatigue, ease muscle cramps, control spasticity, and reduce excess saliva and phlegm. Gabapentin, pregabalin, and tricyclic antidepressants (e.g. amitriptyline) can be used for neuropathic pain, while nonsteroidal anti-inflammatory drugs (NSAIDs), acetaminophen, and opioids can be used for nociceptive pain. Depression can be treated with selective serotonin reuptake inhibitors (SSRIs) or tricyclic antidepressants, while benzodiazepines can be used for anxiety. There are no medications to treat cognitive impairment or frontotemporal dementia (FTD); however, SSRIs and antipsychotics can help treat some of the symptoms of FTD. Baclofen and tizanidine are the most commonly used oral drugs for treating spasticity; an intrathecal baclofen pump can be used for severe spasticity. Atropine, scopolamine, amitriptyline, or glycopyrrolate may be prescribed when people with ALS begin having trouble swallowing their saliva (sialorrhea). Based on a randomized controlled trial from 2016, a 2017 review concluded that mexiletine is safe and effective for treating cramps in ALS.

Breathing support

Non-invasive ventilation

Non-invasive ventilation (NIV) is the primary treatment for respiratory failure in ALS and was the first treatment shown to improve both survival and quality of life. NIV uses a face or nasal mask connected to a ventilator that provides intermittent positive pressure to support breathing. Continuous positive pressure is not recommended for people with ALS because it makes breathing more difficult. Initially, NIV is used only at night, because the first sign of respiratory failure is decreased gas exchange (hypoventilation) during sleep; symptoms associated with this nocturnal hypoventilation include interrupted sleep, anxiety, morning headaches, and daytime fatigue. As the disease progresses, people with ALS develop shortness of breath when lying down, during physical activity or talking, and eventually at rest. Other symptoms include poor concentration, poor memory, confusion, respiratory tract infections, and a weak cough. Respiratory failure is the most common cause of death in ALS.

It is important to monitor the respiratory function of people with ALS every three months, because beginning NIV soon after the start of respiratory symptoms is associated with increased survival. This involves asking the person with ALS whether they have any respiratory symptoms and measuring their respiratory function. The most commonly used measurement is upright forced vital capacity (FVC), but it is a poor detector of early respiratory failure and is not a good choice for those with bulbar symptoms, as they have difficulty maintaining a tight seal around the mouthpiece. Measuring FVC while the person is lying on their back (supine FVC) is a more accurate measure of diaphragm weakness than upright FVC.
Sniff nasal inspiratory pressure (SNIP) is a rapid, convenient test of diaphragm strength that is not affected by bulbar muscle weakness. If someone with ALS has signs and symptoms of respiratory failure, they should undergo daytime blood gas analysis to look for hypoxemia (low oxygen in the blood) and hypercapnia (too much carbon dioxide in the blood). If their daytime blood gas analysis is normal, they should then have nocturnal pulse oximetry to look for hypoxemia during sleep.

Non-invasive ventilation prolongs survival longer than riluzole. A 2006 randomized controlled trial found that NIV prolongs survival by about 48 days and improves quality of life; however, it also found that some people with ALS benefit more from this intervention than others. For those with normal or only moderately impaired bulbar function, NIV prolongs survival by about seven months and significantly improves quality of life. For those with poor bulbar function, NIV neither prolongs survival nor improves quality of life, though it does improve some sleep-related symptoms. Despite the clear benefits of NIV, about 25–30% of all people with ALS are unable to tolerate it, especially those with cognitive impairment or bulbar dysfunction. Results from a large 2015 cohort study suggest that NIV may prolong survival in those with bulbar weakness, so NIV should be offered to all people with ALS, even if it is likely that they will have difficulty tolerating it.

Invasive ventilation

Invasive ventilation bypasses the nose and mouth (the upper airways) by making a cut in the trachea (tracheostomy) and inserting a tube connected to a ventilator. It is an option for people with advanced ALS whose respiratory symptoms are poorly managed despite continuous NIV use. While invasive ventilation prolongs survival, especially for those younger than 60, it does not treat the underlying neurodegenerative process. The person with ALS continues to lose motor function, making communication increasingly difficult and sometimes leading to locked-in syndrome, in which they are completely paralyzed except for their eye muscles. About half of the people with ALS who choose to undergo invasive ventilation report a decrease in their quality of life, but most still consider it to be satisfactory. However, invasive ventilation imposes a heavy burden on caregivers and may decrease their quality of life. Attitudes toward invasive ventilation vary from country to country; about 30% of people with ALS in Japan choose invasive ventilation, versus less than 5% in North America and Europe.

Therapy

Physical therapy plays a large role in rehabilitation for individuals with ALS. Specifically, physical, occupational, and speech therapists can set goals and promote benefits for individuals with ALS by delaying loss of strength, maintaining endurance, limiting pain, improving speech and swallowing, preventing complications, and promoting functional independence. Occupational therapy and special equipment such as assistive technology can also enhance people's independence and safety throughout the course of ALS. Gentle, low-impact aerobic exercise such as performing activities of daily living, walking, swimming, and stationary bicycling can strengthen unaffected muscles, improve cardiovascular health, and help people fight fatigue and depression. Range-of-motion and stretching exercises can help prevent painful spasticity and shortening (contracture) of muscles.
Physical and occupational therapists can recommend exercises that provide these benefits without overworking muscles, because muscle exhaustion can lead to a worsening of symptoms rather than helping people with ALS. They can suggest devices such as ramps, braces, walkers, bathroom equipment (shower chairs, toilet risers, etc.), and wheelchairs that help people remain mobile. Occupational therapists can provide or recommend equipment and adaptations that enable people with ALS to retain as much safety and independence in activities of daily living as possible. Since respiratory insufficiency is the primary cause of mortality, physical therapists can help improve respiratory outcomes by implementing pulmonary physical therapy. This includes inspiratory muscle training, lung volume recruitment training, and manually assisted cough therapy, aimed at increasing respiratory muscle strength and survival rates.

People with ALS who have difficulty speaking or swallowing may benefit from working with a speech-language pathologist. These health professionals can teach adaptive strategies, such as techniques to help people speak louder and more clearly. As ALS progresses, speech-language pathologists can recommend the use of augmentative and alternative communication such as voice amplifiers, speech-generating devices (voice output communication devices), or low-tech techniques such as head-mounted laser pointers, alphabet boards, or yes/no signals.

Nutrition

Preventing weight loss and malnutrition in people with ALS improves both survival and quality of life. Weight loss in ALS is often caused by muscle wasting and increased resting energy expenditure. It may also be secondary to reduced food intake, since dysphagia develops in about 85% of people with ALS at some point in their disease course. Regular periodic assessment of weight and swallowing ability is therefore very important. Dysphagia is often initially managed by dietary changes and modified swallowing techniques. People with ALS are often instructed to avoid dry or chewy foods and instead have meals that are soft, moist, and easy to swallow. Switching to thick liquids (like fruit nectar or smoothies) or adding thickeners (to thin fluids like water and coffee) may also help people who have difficulty swallowing liquids. There is tentative evidence that high-calorie diets may prevent further weight loss and improve survival, but more research is needed.

A feeding tube should be considered if someone with ALS loses 5% or more of their body weight or if they cannot safely swallow food and water. This can take the form of a gastrostomy tube, in which a tube is placed through the wall of the abdomen into the stomach, or (less commonly) a nasogastric tube, in which a tube is placed through the nose and down the esophagus into the stomach. A gastrostomy tube is more appropriate for long-term use than a nasogastric tube, which is uncomfortable and can cause esophageal ulcers. The feeding tube is usually inserted by a percutaneous endoscopic gastrostomy (PEG) procedure. While there is weak evidence that PEG tubes improve survival in people with ALS, no randomized controlled trials have yet been conducted to indicate whether enteral tube feeding has benefits compared with continued feeding by mouth.
Nevertheless, PEG tubes are still offered with the intent of improving the person's quality of life by sustaining nutrition, hydration status, and medication intake. End-of-life care Palliative care, which relieves symptoms and improves the quality of life without treating the underlying disease, should begin shortly after someone is diagnosed with ALS. Early discussion of end-of-life issues gives people with ALS time to reflect on their preferences for end-of-life care and can help avoid unwanted interventions or procedures. Once they have been fully informed about all aspects of various life-prolonging measures, they can fill out advance directives indicating their attitude toward noninvasive ventilation, invasive ventilation, and feeding tubes. Late in the disease course, difficulty speaking due to muscle weakness (dysarthria) and cognitive dysfunction may impair their ability to communicate their wishes regarding care. Continued failure to solicit the preferences of the person with ALS may lead to unplanned and potentially unwanted emergency interventions, such as invasive ventilation. If people with ALS or their family members are reluctant to discuss end-of-life issues, it may be useful to raise the subject when gastrostomy or noninvasive ventilation is introduced. Hospice care, or palliative care at the end of life, is especially important in ALS because it helps to optimize the management of symptoms and increases the likelihood of a peaceful death. It is unclear exactly when the end-of-life phase begins in ALS, but it is associated with significant difficulty moving, communicating, and, in some cases, thinking. Although many people with ALS fear choking to death (suffocating), they can be reassured that this occurs rarely, in less than 1% of cases. Most patients die at home, and in the final days of life, opioids can be used to treat pain and dyspnea, while benzodiazepines can be used to treat anxiety. Epidemiology ALS is the most common motor neuron disease in adults and the third most common neurodegenerative disease after Alzheimer's disease and Parkinson's disease. Worldwide, the number of people who develop ALS is estimated at 1.9 per 100,000 per year, while the number of people who have ALS at any given time is estimated at about 4.5 per 100,000. In Europe, the number of new cases a year is about 2.6 people per 100,000, while the number affected is 7–9 people per 100,000. The lifetime risk of developing ALS is 1:350 for European men and 1:400 for European women. Men have a higher risk mainly because spinal-onset ALS is more common in men than women. The number of those with ALS in the United States in 2015 was 5.2 people per 100,000, and was higher in whites, males, and people over 60 years old. The number of new cases is about 0.8 people per 100,000 per year in East Asia and about 0.7 people per 100,000 per year in South Asia. About 80% of ALS epidemiology studies have been conducted in Europe and the United States, mostly in people of northern European descent. There is not enough information to determine the rates of ALS in much of the world, including Africa, parts of Asia, India, Russia, and South America. There are several geographic clusters in the Western Pacific where the prevalence of ALS was reported to be 50–100 times higher than in the rest of the world, including Guam, the Kii Peninsula of Japan, and Western New Guinea. The incidence in these areas has decreased since the 1960s; the cause remains unknown. 
People of all races and ethnic backgrounds may be affected by ALS, but it is more common in whites than in Africans, Asians, or Hispanics. In the United States in 2015, the prevalence of ALS in whites was 5.4 people per 100,000, while the prevalence in blacks was 2.3 people per 100,000. The Midwest had the highest prevalence of the four US Census regions, with 5.5 people per 100,000, followed by the Northeast (5.1), the South (4.7), and the West (4.4). The Midwest and Northeast likely had a higher prevalence of ALS because they have a higher proportion of whites than the South and West. Ethnically mixed populations may be at a lower risk of developing ALS; a study in Cuba found that people of mixed ancestry were less likely to die from ALS than whites or blacks. There are also differences in the genetics of ALS between different ethnic groups; the most common ALS gene in Europe is C9orf72, followed by SOD1, TARDBP, and FUS, while the most common ALS gene in Asia is SOD1, followed by FUS, C9orf72, and TARDBP. ALS can affect people at any age, but the peak incidence is between 50 and 75 years, and the incidence decreases dramatically after 80 years. The reason for the decreased incidence in the elderly is unclear. One thought is that people who survive into their 80s may not be genetically susceptible to developing ALS; alternatively, ALS in the elderly might go undiagnosed because of comorbidities (other diseases they have), difficulty seeing a neurologist, or because they die quickly from an aggressive form of ALS. In the United States in 2015, the lowest prevalence was in the 18–39 age group, while the highest prevalence was in the 70–79 age group. Sporadic ALS usually starts around the ages of 58 to 63 years, while genetic ALS starts earlier, usually around 47 to 52 years. The number of ALS cases worldwide is projected to increase from 222,801 in 2015 to 376,674 in 2040, an increase of 69%. This will largely be due to the aging of the world's population, especially in developing countries. History Descriptions of the disease date back to at least 1824, by Charles Bell. In 1850, François-Amilcar Aran was the first to describe a disorder he named "progressive muscular atrophy", a form of ALS in which only the lower motor neurons are affected. In 1869, the connection between the symptoms and the underlying neurological problems was first described by Jean-Martin Charcot, who introduced the term amyotrophic lateral sclerosis in his 1874 paper. Flail arm syndrome, a regional variant of ALS, was first described by Alfred Vulpian in 1886. Flail leg syndrome, another regional variant of ALS, was first described by Pierre Marie and his student Patrikios in 1918. Diagnostic criteria In the 1950s, electrodiagnostic testing (EMG) and nerve conduction velocity (NCV) testing began to be used to evaluate clinically suspected ALS. In 1969, Edward H. Lambert published the first EMG/NCS diagnostic criteria for ALS, consisting of four findings he considered to strongly support the diagnosis. Since then, several diagnostic criteria have been developed; they are mostly used in research, as inclusion/exclusion criteria and to stratify patients for analysis in trials. Research diagnostic criteria for ALS include the "El Escorial" criteria (1994, revised in 1998), the "Awaji" criteria (2006), which proposed using EMG and NCV tests to help diagnose ALS earlier, and, most recently, the "Gold Coast" criteria (2019). Name Amyotrophic comes from Greek: a- means "no", myo- (from mûs) refers to "muscle", and trophḗ means "nourishment". 
Therefore, amyotrophy means "muscle malnourishment" or the wasting of muscle tissue. Lateral identifies the locations in the spinal cord of the affected motor neurons. Sclerosis means "scarring" or "hardening" and refers to the death of the motor neurons in the spinal cord. ALS is sometimes referred to as Charcot's disease (not to be confused with Charcot–Marie–Tooth disease or Charcot joint disease), because Jean-Martin Charcot was the first to connect the clinical symptoms with the pathology seen at autopsy. The British neurologist Russell Brain coined the term motor neurone disease in 1933 to reflect his belief that ALS, progressive bulbar palsy, and progressive muscular atrophy were all different forms of the same disease. In some countries, especially the United States, ALS is called Lou Gehrig's disease after the American baseball player Lou Gehrig, who was diagnosed with ALS in 1939. In the United States and continental Europe, the term ALS (as well as Lou Gehrig's disease in the US) refers to all forms of the disease, including "classical" ALS, progressive bulbar palsy, progressive muscular atrophy, and primary lateral sclerosis. In the United Kingdom and Australia, the term motor neurone disease refers to all forms of the disease while ALS only refers to "classical" ALS, meaning the form with both upper and lower motor neuron involvement. Society and culture In addition to the baseball player Lou Gehrig and the theoretical physicist Stephen Hawking (who notably lived longer than any other known person with the condition), several other notable individuals have or have had ALS. Several books have been written and films have been made about people with the disease as well. American sociology professor and ALS patient Morrie Schwartz was the subject of the memoir Tuesdays with Morrie and the film of the same name, and Stephen Hawking was the subject of the critically acclaimed biopic The Theory of Everything. In August 2014, the "Ice Bucket Challenge" to raise money for ALS research went viral online. Participants filmed themselves filling a bucket with ice water and pouring it onto themselves; they then nominated other individuals to do the same. Many participants donated to ALS research at the ALS Association, the ALS Therapy Development Institute, the ALS Society of Canada, or the Motor Neurone Disease Association in the UK.
Biology and health sciences
Specific diseases
Health
19375754
https://en.wikipedia.org/wiki/Craigslist
Craigslist
Craigslist (stylized as craigslist) is a privately held American company operating a classified advertisements website with sections devoted to jobs, housing, for sale, items wanted, services, community service, gigs, résumés, and discussion forums. Craig Newmark began the service in 1995 as an email distribution list to friends, featuring local events in the San Francisco Bay Area. It became a web-based service in 1996 and expanded into other classified categories. It started expanding to other U.S. and Canadian cities in 2000. In 2023, Craigslist listed 700 cities in 70 countries on its website and generated 560 million visits per month. Despite this global presence, 90% of the website's visitors are from the United States. According to Alexa, Craigslist was the 19th most visited website in the United States in 2022 and the 16th in the world in 2023. History Having observed people helping one another in friendly, social, and trusting communal ways on the Internet via the WELL, MindVox and Usenet, and feeling isolated as a relative newcomer to San Francisco, Craigslist founder Craig Newmark decided to create something similar for local events. In early 1995, he began an email distribution list to friends. Most of the early postings were submitted by Newmark and were notices of social events of interest to software and Internet developers living and working in the San Francisco Bay Area. The number of subscribers and postings grew rapidly via manual advertising. There was no moderation and Newmark was surprised when people started using the mailing list for non-event postings. People trying to get technical positions filled found that the list was a good way to reach people with the skills they were looking for. This led to the addition of a jobs category. User demand for more categories caused the list of categories to grow. The initial technology encountered some limits, so Majordomo was installed, and the mailing list "Craigslist" resumed operations. Community members started asking for a web interface. Newmark registered "craigslist.org", and the website went live in 1996. In the fall of 1998, the name "List Foundation" was introduced, and Craigslist started transitioning to the use of this name. When Newmark learned of other organizations called "List Foundation", the use of this name was dropped. Craigslist was incorporated as a private for-profit company in 1999. Around the time of these events, Newmark realized the site was growing so fast that he could stop working as a software engineer and devote his full attention to running Craigslist. By then, nine employees were working out of Newmark's San Francisco apartment. In January 2000, current CEO Jim Buckmaster joined the company as lead programmer and CTO. Buckmaster contributed to the site's multi-city architecture, search engine, discussion forums, flagging system, self-posting process, homepage design, personals categories, and best-of-Craigslist feature. He was later promoted to CEO. The website expanded into nine more U.S. cities in 2000, four in 2001 and 2002, and 14 in 2003. On August 1, 2004, Craigslist began charging $25 to post job openings on the New York and Los Angeles pages. On the same day, a new section called "Gigs" was added, where low-cost and unpaid jobs can be posted for free. In March 2008, Spanish, French, Italian, German, and Portuguese became the first non-English languages Craigslist supported. As of August 9, 2012, over 700 cities and areas in 70 countries had Craigslist sites. 
Some Craigslist sites cover large regions instead of individual metropolitan areas: for example, the U.S. states of Delaware and Wyoming, the Colorado Western Slope, the California Gold Country, and the Upper Peninsula of Michigan. Craigslist sites for some large cities, such as Los Angeles, also include the ability for the user to focus on a specific area of a city (such as central Los Angeles). Operations The site serves more than 20 billion page views per month, putting it in 72nd place overall among websites worldwide and 11th place overall among websites in the United States (per Alexa.com in 2016), with millions of unique monthly visitors in the United States alone (per Compete.com in 2010). Measured by the number of new classified advertisements each month, Craigslist is the leading classifieds service in any medium. In 2009, the volume of new job listings each month made the site one of the top job boards in the world. The 23 largest U.S. cities listed on the Craigslist home page collectively receive more than 300,000 postings per day just in the "for sale" and "housing" sections as of October 2011. The classified advertisements range from traditional buy/sell ads and community announcements to personal ads. In 2009, Craigslist operated with a staff of 28 people. By 2019, this number had grown to 50. In that year alone, the company made more than $1 billion in revenue, while charging only about US$5 for most ads (the exact price depends on the type of ad and the service or property location). Financials and ownership In December 2006, at the UBS Global Media Conference in New York, Craigslist CEO Jim Buckmaster told Wall Street analysts that Craigslist had little interest in maximizing profit, and instead preferred to help users find cars, apartments, jobs and dates. Craigslist's main source of revenue is paid job ads in select American cities. The company does not formally disclose financial or ownership information. Analysts and commentators have reported varying figures for its annual revenue, ranging from $10 million in 2004 and $20 million in 2005 to higher estimates in 2006 and 2007. Fortune has described its revenue model as "quasi-socialist", citing its focus on features for users regardless of profitability. Eric Baker of StubHub has described the site as a "potential gold mine of revenue if only it would abandon its communist manifesto." On August 13, 2004, Newmark announced on his blog that auction giant eBay had purchased a 25% stake in the company from a former employee. Some fans of Craigslist expressed concern that this development would affect the site's longtime non-commercial nature. Since then, there have been no substantive changes to the usefulness or the non-advertising nature of the site: no banner ads have appeared, and charges apply only to a few services provided to businesses. The company was believed to be owned principally by Newmark, Buckmaster, and eBay (the three board members). eBay owned approximately 25%, and Newmark is believed to own the largest stake. In April 2008, eBay announced it was suing Craigslist to "safeguard its four-year financial investment". eBay claimed that Craigslist executives had taken actions that "unfairly diluted eBay's economic interest by more than 10%". 
Craigslist filed a counter-suit to "remedy the substantial and ongoing harm to fair competition" that it claimed eBay's actions as a Craigslist shareholder had caused; Craigslist claimed that eBay had used its minority stake to gain access to confidential information, which eBay then used as part of its competing service Kijiji. On June 19, 2015, eBay Inc. announced that it would divest its stake back to Craigslist for an undisclosed amount, and settle its litigation with the company. The move came shortly before eBay's planned spin-off of PayPal, and an effort to divest other units to focus on its core business. The Swedish luxury marketplace website Jameslist.com received a lawsuit from Craigslist, filed on July 11, 2012, which among unspecified damages also asked for a complete shutdown of Jameslist.com. As a consequence, the young company was forced to rename itself JamesEdition. Content policies As of 2012, mashup sites such as padmapper.com and housingmaps.com were overlaying Craigslist data with Google Maps and adding their own search filters to improve usability. In June 2012, Craigslist changed its terms of service to disallow the practice. In July 2012, Craigslist filed a lawsuit against padmapper.com. Following the shutdown of Padmapper.com, some users complained that the service was useful to them and therefore should have remained intact. App In December 2019, Craigslist introduced an app for iOS and a beta version for Android. Site characteristics Craigslist is famous for its repeated refusal to update its website appearance. As of 2024, it still features the barebones design of 1995, which has become its most recognizable feature. Personals Over the years, Craigslist became a very popular online destination for arranging dates and sex. The personals section allowed postings that were for "strictly platonic", "dating/romance", and "casual encounters". The site was considered particularly useful by lesbians and gay men seeking to make connections, because of the service's free and open nature and because of the difficulty of otherwise finding each other in more conservative areas. In 2005, San Francisco Craigslist's "men seeking men" section was identified as facilitating sexual encounters and was the second most common correlate of syphilis infections. The company was pressured by San Francisco Department of Public Health officials, prompting Jim Buckmaster to state that the site has a very small staff and that the public "must police themselves". The site has, however, added links to the San Francisco City Clinic and STD forums. On March 22, 2018, Craigslist discontinued its "Personals" section in the United States in response to the passing of the Stop Enabling Sex Traffickers Act (SESTA), which removes Section 230 safe harbours for interactive services knowingly involved in illegal sex trafficking. The service stated: "US Congress just passed HR 1865, 'FOSTA', seeking to subject websites to criminal and civil liability when third parties (users) misuse online personals unlawfully. Any tool or service can be misused. We can't take such risk without jeopardizing all our other services, so we are regretfully taking craigslist personals offline. To the millions of spouses, partners, and couples who met through craigslist, we wish you every happiness!" 
Adult services controversy Advertisements for "adult" (previously "erotic") services were initially given special treatment, then closed entirely on September 4, 2010, following a controversy over claims by state attorneys general that the advertisements promoted prostitution. In 2002, a disclaimer was put on the "men seeking men", "casual encounters", "erotic services", and "rants and raves" boards to ensure that those who clicked on these sections were over the age of 18, but no disclaimer was put on the "men seeking women", "women seeking men" or "women seeking women" boards. As a response to charges of sex discrimination and negative stereotyping, Buckmaster explained that the company's policy was a response to user feedback requesting the warning on the more sexually explicit sections, including "men seeking men". On May 13, 2009, Craigslist announced that it would close the erotic services section, replacing it with an adult services section to be reviewed by Craigslist employees. This decision came after allegations by several U.S. states that the erotic services ads were being used for prostitution. On September 4, 2010, Craigslist closed the adult services section of its website in the United States. The site initially replaced the adult services page link with the word "censored" in white-on-black text. The site received criticism and complaints from attorneys general that the section's ads were facilitating prostitution and child sex trafficking. The adult services section link was still active in countries outside of the U.S. Matt Zimmerman, senior staff attorney for the Electronic Frontier Foundation, said, "Craigslist isn't legally culpable for these posts, but the public pressure has increased and Craigslist is a small company." Brian Carver, attorney and assistant professor at UC Berkeley, said that legal threats could have a chilling effect on online expression. "If you impose liability on Craigslist, YouTube and Facebook for anything their users do, then they're not going to take chances. It would likely result in the takedown of what might otherwise be perfectly legitimate free expression." Later in 2010, the "censored" label and its dead link to adult services were completely removed. Craigslist announced on September 15, 2010, that it had closed its adult services in the United States; however, it defended its right to carry such ads. Free speech advocates and some sex crime victim advocates criticized the removal of the section, saying that it threatened free speech and that it diminished law enforcement's ability to track criminals. However, the removal was applauded by many state attorneys general and some other groups fighting sex crimes. Craigslist said that there is some indication that those who posted ads in the adult services section are posting elsewhere. Sex ads had initially cost $10, and it was estimated they would have brought in $44 million in 2010 had they continued. In the four months following the closure, monthly revenue from sex ads on six other sites (primarily Backpage) increased from $2.1 million to $3.1 million, partly due to price increases. The company has tried to fight prostitution and sex trafficking, and in 2015, Craig Newmark received an award from the FBI for cooperation with law enforcement to fight human trafficking. Also in 2010, after pressure from Ottawa and several provinces, Craigslist removed 'Erotic Services' and 'Adult Gigs' from its Canadian website, even though prostitution was not itself illegal in Canada at the time. 
When the Fight Online Sex Trafficking Act was signed into law on April 11, 2018, Craigslist chose to close its "Personals" section within all US domains to avoid civil lawsuits. About its decision, Craigslist stated: "Any tool or service can be misused. We can't take such risk without jeopardizing all our other services." Flagging Craigslist has a user flagging system to report illegal and inappropriate postings. Flagging does not require account login or registration, and can be done anonymously by anyone. Postings are subject to automated removal when a certain number of users flag them. The number of flags required for a posting's removal is dynamically variable and remains unknown to all but Craigslist staff. Some users allege that flagging may also occur as acts of vandalism by groups of individuals at different ISPs, but no evidence of this has ever been shown. Flagging can also alert Craigslist staff to blocks of ads requiring manual oversight or removal. Flagging is also done automatically by Craigslist's own systems, in which case the posts never appear in search results. Bartering Craigslist includes a barter option in its "for sale" section. This growing trade economy has been documented on the television program Barter Kings and the blog one red paperclip. Criticism From its earliest days, Craigslist has faced criticism for allowing illegal or unethical activities and for poor protection of buyers and sellers. Well-recognized types of illegal activity include counterfeit goods, advance-fee fraud, and buyer-seller collusion. In a "counterfeit goods" scam, the seller uses a marketplace to sell illegal or counterfeit products while misrepresenting them as legitimate goods. In "advance-fee fraud", the seller tricks the buyer into making an unsecured payment (i.e., one made outside a credit card or another legitimate escrow service) before receiving the product or service. In "buyer-seller collusion", a fraudulent buyer uses the victim's payment information (such as a credit card number) to buy fake goods from a colluding fake seller, thus taking the victim's money while evading traditional credit card safeguards. In its defense, Craigslist has successfully used Section 230 of the US Communications Decency Act (CDA), which states that websites cannot be held liable for the actions of their users (see Dart v. Craigslist, Inc.). Craigslist also successfully challenged FOSTA, a 2018 amendment to the CDA, which created an exception allowing legal actions against a platform if its users violate federal sex-trafficking laws. In July 2005, the San Francisco Chronicle criticized Craigslist for allowing ads from dog breeders, stating that this could encourage the over-breeding and irresponsible selling of pit bulls in the Bay Area. According to Craigslist's terms of service, the sale of pets is prohibited, though re-homing with small adoption fees is acceptable. In addition to criticism over illegal activities and poor customer protection, Craigslist has repeatedly been accused of unfair competition. For example, in January 2006, the San Francisco Bay Guardian published an editorial claiming that Craigslist could threaten the business of local alternative newspapers. L. Gordon Crovitz, writing for The Wall Street Journal, criticized the company for using lawsuits "to prevent anyone from doing to it what it did to newspapers", contrary to the spirit of the website, which describes itself as having a "noncommercial nature, public service mission, and noncorporate culture". 
This article was a reaction to lawsuits from Craigslist, which Crovitz says were intended to prevent competition. Craigslist filed a trademark lawsuit against the Swedish luxury marketplace website Jameslist.com on July 11, 2012, forcing the company to rename itself JamesEdition. In 2012, Craigslist sued PadMapper, a site that hoped to improve the user interface for browsing housing ads, and 3Taps, a company that helped PadMapper obtain data from Craigslist, in Craigslist v. 3Taps. This led users to criticize Craigslist for trying to shut down a service that was useful to them. Nonprofit foundation In 2001, the company started the Craigslist Foundation, a 501(c)(3) nonprofit organization that offered free and low-cost events and online resources to promote community building at all levels. It accepts charitable donations, and rather than directly funding organizations, it produces "face-to-face events and offers online resources to help grassroots organizations get off the ground and contribute real value to the community". In 2012, the Craigslist Foundation closed, with its charity work moving to support charitable funds. In popular culture Films 24 Hours on Craigslist (2005), an American feature-length documentary that captures the people and stories behind a single day's posts on Craigslist. Due Date (2010) shows one of the lead characters, Ethan (Zach Galifianakis), buying marijuana from a dealer through the site. The Craigslist Killer (January 3, 2011), a Lifetime made-for-TV movie featuring the story of Philip Markoff, who was accused of robbing and/or murdering several prostitutes he met through Craigslist's adult services section. Craigslist Joe (August 2012), a documentary featuring a 29-year-old man living for 31 days solely from donations of food, shelter, and transportation throughout the U.S., found via Craigslist. Mike and Dave Need Wedding Dates (2016), a comedy based on a real Craigslist ad, placed by two brothers who wanted dates for their cousin's wedding, that went viral in February 2013 and which they then turned into a book, Mike and Dave Need Wedding Dates: And a Thousand Cocktails. Television The American comedy series Bored to Death revolves around a fictional Jonathan Ames (played by Jason Schwartzman) who posts an ad on Craigslist advertising himself as an unlicensed private detective. The premise of the sitcom New Girl centers on a girl (Zooey Deschanel) who looks on Craigslist to find new roommates. She misunderstands one of the listings and ends up moving in with three men when she had intended to find female roommates. The American mockumentary sitcom Modern Family mentions Craigslist in "Express Christmas", the tenth episode of its third season, when Phil Dunphy, played by Ty Burrell, buys a signed Joe DiMaggio card for his father-in-law Jay, played by Ed O'Neill. Theatre In November 2007, Ryan J. Davis directed Jeffery Self's solo show My Life on the Craigslist at off-Broadway's New World Stages. The show focuses on a young man's sexual experiences on Craigslist and was so successful that it returned to New York by popular demand. Songs In June 2009, "Weird Al" Yankovic released a song entitled "Craigslist", which parodied the types of ads one might see on the site. The song was a style parody of The Doors and featured Doors member Ray Manzarek on the keyboards. In 2006, composer Gabriel Kahane released an album of his satirical art songs for voice and piano, entitled "Craigslistlieder", using excerpts from real Craigslist ads as text. 
Media Craigslist received attention in the media in 2011 and 2014 when it was reported that convicted murderers had used the platform to lure their victims. The site has been described by Martin Sorrell as "socialistic anarchist".
Technology
E-commerce
null
19378200
https://en.wikipedia.org/wiki/Representation%20theory
Representation theory
Representation theory is a branch of mathematics that studies abstract algebraic structures by representing their elements as linear transformations of vector spaces, and studies modules over these abstract algebraic structures. In essence, a representation makes an abstract algebraic object more concrete by describing its elements by matrices and their algebraic operations (for example, matrix addition, matrix multiplication). The algebraic objects amenable to such a description include groups, associative algebras and Lie algebras. The most prominent of these (and historically the first) is the representation theory of groups, in which elements of a group are represented by invertible matrices such that the group operation is matrix multiplication. Representation theory is a useful method because it reduces problems in abstract algebra to problems in linear algebra, a subject that is well understood. Representations of more abstract objects in terms of familiar linear algebra can elucidate properties and simplify calculations within more abstract theories. For instance, representing a group on an infinite-dimensional Hilbert space allows methods of analysis to be applied to the theory of groups. Furthermore, representation theory is important in physics because it can describe how the symmetry group of a physical system affects the solutions of equations describing that system. Representation theory is pervasive across fields of mathematics. The applications of representation theory are diverse. In addition to its impact on algebra, representation theory generalizes Fourier analysis via harmonic analysis, is connected to geometry via invariant theory and the Erlangen program, and has an impact on number theory via automorphic forms and the Langlands program. There are many approaches to representation theory: the same objects can be studied using methods from algebraic geometry, module theory, analytic number theory, differential geometry, operator theory, algebraic combinatorics and topology. The success of representation theory has led to numerous generalizations. One of the most general is in category theory. The algebraic objects to which representation theory applies can be viewed as particular kinds of categories, and the representations as functors from the object category to the category of vector spaces. This description points to two natural generalizations: first, the algebraic objects can be replaced by more general categories; second, the target category of vector spaces can be replaced by other well-understood categories. Definitions and concepts Let V be a vector space over a field F. For instance, suppose V is R^n or C^n, the standard n-dimensional space of column vectors over the real or complex numbers, respectively. In this case, the idea of representation theory is to do abstract algebra concretely by using n × n matrices of real or complex numbers. There are three main sorts of algebraic objects for which this can be done: groups, associative algebras and Lie algebras. The set of all invertible n × n matrices is a group under matrix multiplication, and the representation theory of groups analyzes a group by describing ("representing") its elements in terms of invertible matrices. Matrix addition and multiplication make the set of all n × n matrices into an associative algebra, and hence there is a corresponding representation theory of associative algebras. 
If we replace matrix multiplication by the matrix commutator MN − NM, then the n × n matrices become instead a Lie algebra, leading to a representation theory of Lie algebras. This generalizes to any field F and any vector space V over F, with linear maps replacing matrices and composition replacing matrix multiplication: there is a group GL(V, F) of automorphisms of V, an associative algebra End_F(V) of all endomorphisms of V, and a corresponding Lie algebra gl(V, F). Definition Action There are two ways to define a representation. The first uses the idea of an action, generalizing the way that matrices act on column vectors by matrix multiplication. A representation of a group G or (associative or Lie) algebra A on a vector space V is a map Φ: G × V → V (or Φ: A × V → V) with two properties. First, for any g in G (or x in A), the map v ↦ Φ(g, v) is linear (over F). Second, if we introduce the notation g · v for Φ(g, v), then for any g1, g2 in G and v in V: (2.1) e · v = v; (2.2) g1 · (g2 · v) = (g1g2) · v, where e is the identity element of G and g1g2 is the group product in G. The definition for associative algebras is analogous, except that associative algebras do not always have an identity element, in which case equation (2.1) is omitted. Equation (2.2) is an abstract expression of the associativity of matrix multiplication. This doesn't hold for the matrix commutator, and also there is no identity element for the commutator. Hence for Lie algebras, the only requirement is that for any x1, x2 in A and v in V: (2.2′) x1 · (x2 · v) − x2 · (x1 · v) = [x1, x2] · v, where [x1, x2] is the Lie bracket, which generalizes the matrix commutator MN − NM. Mapping The second way to define a representation focuses on the map φ sending g in G to a linear map φ(g): V → V, which satisfies φ(g1g2) = φ(g1) ∘ φ(g2) and similarly in the other cases. This approach is both more concise and more abstract. From this point of view: a representation of a group G on a vector space V is a group homomorphism φ: G → GL(V,F); a representation of an associative algebra A on a vector space V is an algebra homomorphism φ: A → EndF(V); a representation of a Lie algebra 𝖆 on a vector space V is a Lie algebra homomorphism φ: 𝖆 → gl(V,F). Terminology The vector space V is called the representation space of φ and its dimension (if finite) is called the dimension of the representation (sometimes degree). It is also common practice to refer to V itself as the representation when the homomorphism φ is clear from the context; otherwise the notation (V,φ) can be used to denote a representation. When V is of finite dimension n, one can choose a basis for V to identify V with F^n, and hence recover a matrix representation with entries in the field F. An effective or faithful representation is a representation (V,φ) for which the homomorphism φ is injective. Equivariant maps and isomorphisms If V and W are vector spaces over F, equipped with representations φ and ψ of a group G, then an equivariant map from V to W is a linear map α: V → W such that α(g · v) = g · α(v) for all g in G and v in V. In terms of φ: G → GL(V) and ψ: G → GL(W), this means α ∘ φ(g) = ψ(g) ∘ α for all g in G, that is, the corresponding square of maps commutes. Equivariant maps for representations of an associative or Lie algebra are defined similarly. If α is invertible, then it is said to be an isomorphism, in which case V and W (or, more precisely, φ and ψ) are isomorphic representations, also phrased as equivalent representations. An equivariant map is often called an intertwining map of representations. Also, in the case of a group G, it is on occasion called a G-map. Isomorphic representations are, for practical purposes, "the same"; they provide the same information about the group or algebra being represented. 
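A minimal numerical sketch may help here; it is not part of the original article, and the names phi, psi, and alpha are invented for illustration. It realizes the mapping definition for the cyclic group Z/4 acting on R^2 by rotations, checks the homomorphism property φ(g1g2) = φ(g1)φ(g2), and exhibits an invertible equivariant map α onto an equivalent representation ψ.

import numpy as np

def phi(k):
    """Representation of Z/4 on R^2: the class of k acts as rotation by k * 90 degrees."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Homomorphism property phi(g1 g2) = phi(g1) phi(g2); the group law is
# addition of exponents modulo 4.
for k1 in range(4):
    for k2 in range(4):
        assert np.allclose(phi((k1 + k2) % 4), phi(k1) @ phi(k2))

# Any invertible change of basis alpha yields an isomorphic representation
# psi(g) = alpha phi(g) alpha^{-1}, and alpha is then an equivariant map.
alpha = np.array([[2.0, 1.0], [1.0, 1.0]])
alpha_inv = np.linalg.inv(alpha)
psi = lambda k: alpha @ phi(k) @ alpha_inv
for k in range(4):
    assert np.allclose(alpha @ phi(k), psi(k) @ alpha)  # alpha intertwines phi and psi
print("phi is a representation of Z/4; alpha is an isomorphism onto psi")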
Representation theory therefore seeks to classify representations up to isomorphism. Subrepresentations, quotients, and irreducible representations If (V, φ) is a representation of (say) a group G, and W is a linear subspace of V that is preserved by the action of G in the sense that g · w ∈ W for all g in G and w in W (Serre calls such W stable under G), then W is called a subrepresentation: by defining ψ(g) = φ(g)|W, where φ(g)|W is the restriction of φ(g) to W, (W, ψ) is a representation of G and the inclusion of W into V is an equivariant map. The quotient space V/W can also be made into a representation of G. If V has exactly two subrepresentations, namely the trivial subspace {0} and V itself, then the representation is said to be irreducible; if V has a proper nontrivial subrepresentation, the representation is said to be reducible. The definition of an irreducible representation implies Schur's lemma: an equivariant map α: V → W between irreducible representations is either the zero map or an isomorphism, since its kernel and image are subrepresentations. In particular, when V = W, this shows that the equivariant endomorphisms of V form an associative division algebra over the underlying field F. If F is algebraically closed, the only equivariant endomorphisms of an irreducible representation are the scalar multiples of the identity. Irreducible representations are the building blocks of representation theory for many groups: if a representation V is not irreducible then it is built from a subrepresentation and a quotient that are both "simpler" in some sense; for instance, if V is finite-dimensional, then both the subrepresentation and the quotient have smaller dimension. There are counterexamples where a representation has a subrepresentation, but only has one non-trivial irreducible component. For example, the additive group (R, +) has a two-dimensional representation φ(a) = [[1, a], [0, 1]] (acting on column vectors). This group has the vector (1, 0) fixed by this homomorphism, but the complement subspace spanned by (0, 1) maps to (a, 1), giving only one irreducible subrepresentation. This is true for all unipotent groups. Direct sums and indecomposable representations If (V,φ) and (W,ψ) are representations of (say) a group G, then the direct sum of V and W is a representation, in a canonical way, via the equation (φ ⊕ ψ)(g)(v, w) = (φ(g)v, ψ(g)w). The direct sum of two representations carries no more information about the group G than the two representations do individually. If a representation is the direct sum of two proper nontrivial subrepresentations, it is said to be decomposable. Otherwise, it is said to be indecomposable. Complete reducibility In favorable circumstances, every finite-dimensional representation is a direct sum of irreducible representations: such representations are said to be semisimple. In this case, it suffices to understand only the irreducible representations. Examples where this "complete reducibility" phenomenon occurs (at least over fields of characteristic zero) include finite groups (see Maschke's theorem), compact groups, and semisimple Lie algebras. In cases where complete reducibility does not hold, one must understand how indecomposable representations can be built from irreducible representations by using extensions of quotients by subrepresentations. Tensor products of representations Suppose φ1: G → GL(V1) and φ2: G → GL(V2) are representations of a group G. Then we can form a representation φ1 ⊗ φ2 of G acting on the tensor product vector space V1 ⊗ V2 as follows: (φ1 ⊗ φ2)(g) = φ1(g) ⊗ φ2(g). If φ1 and φ2 are representations of a Lie algebra, then the correct formula to use is (φ1 ⊗ φ2)(x) = φ1(x) ⊗ I + I ⊗ φ2(x). This product can be recognized as the coproduct on a coalgebra. 
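A hedged sketch of the tensor product construction just defined (not from the article; rot and tensor_rep are invented names): for matrix representations the tensor product is realized by the Kronecker product, and the mixed-product property (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) is exactly what makes (φ1 ⊗ φ2)(g) = φ1(g) ⊗ φ2(g) a representation again.

import numpy as np

def rot(k, m):
    """Rotation by k * (360/m) degrees: a representation of Z/m on R^2."""
    theta = 2 * np.pi * k / m
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

m = 4
# (phi1 tensor phi2)(g) = phi1(g) tensor phi2(g), built with the Kronecker product.
tensor_rep = lambda k: np.kron(rot(k, m), rot(k, m))
for k1 in range(m):
    for k2 in range(m):
        # The homomorphism law survives the tensor construction.
        assert np.allclose(tensor_rep((k1 + k2) % m),
                           tensor_rep(k1) @ tensor_rep(k2))
print("the tensor product is again a representation, here on a 4-dimensional space")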
In general, the tensor product of irreducible representations is not irreducible; the process of decomposing a tensor product as a direct sum of irreducible representations is known as Clebsch–Gordan theory. In the case of the representation theory of the group SU(2) (or equivalently, of its complexified Lie algebra sl(2, C)), the decomposition is easy to work out. The irreducible representations are labeled by a parameter l that is a non-negative integer or half-integer; the representation then has dimension 2l + 1. Suppose we take the tensor product of two representations, with labels l1 and l2, where we assume l1 ≥ l2. Then the tensor product decomposes as a direct sum of one copy of each representation with label l, where l ranges from l1 − l2 to l1 + l2 in increments of 1. If, for example, l1 = l2 = 1, then the values of l that occur are 0, 1, and 2. Thus, the tensor product representation of dimension 3 × 3 = 9 decomposes as a direct sum of a 1-dimensional representation (l = 0), a 3-dimensional representation (l = 1), and a 5-dimensional representation (l = 2). Branches and topics Representation theory is notable for the number of branches it has, and the diversity of the approaches to studying representations of groups and algebras. Although all the theories have in common the basic concepts discussed already, they differ considerably in detail. The differences are at least threefold: Representation theory depends upon the type of algebraic object being represented. There are several different classes of groups, associative algebras and Lie algebras, and their representation theories all have an individual flavour. Representation theory depends upon the nature of the vector space on which the algebraic object is represented. The most important distinction is between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (for example, whether or not the space is a Hilbert space, Banach space, etc.). Additional algebraic structures can also be imposed in the finite-dimensional case. Representation theory depends upon the type of field over which the vector space is defined. The most important cases are the field of complex numbers, the field of real numbers, finite fields, and fields of p-adic numbers. Additional difficulties arise for fields of positive characteristic and for fields that are not algebraically closed. Finite groups Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to geometry and crystallography. Representations of finite groups exhibit many of the features of the general theory and point the way to other branches and topics in representation theory. Over a field of characteristic zero, the representation of a finite group G has a number of convenient properties. First, the representations of G are semisimple (completely reducible). This is a consequence of Maschke's theorem, which states that any subrepresentation V of a G-representation W has a G-invariant complement. One proof is to choose any projection π from W to V and replace it by its average πG defined by πG(x) = (1/|G|) Σ_{g ∈ G} g · π(g⁻¹ · x). πG is equivariant, and its kernel is the required complement. The finite-dimensional G-representations can be understood using character theory: the character of a representation φ: G → GL(V) is the class function χφ: G → F defined by χφ(g) = Tr(φ(g)), where Tr is the trace. An irreducible representation of G is completely determined by its character. 
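Since a representation in characteristic zero is determined by its character, small cases can be checked numerically. The following rough Python sketch is not from the article (perm_matrix and chi are invented names): it computes the character of the 3-dimensional permutation representation of the symmetric group S3, whose value at g is the number of fixed points of g, and uses the standard inner product of characters to verify that the representation is the direct sum of the trivial representation and exactly one other irreducible (the 2-dimensional "standard" representation).

from itertools import permutations
import numpy as np

def perm_matrix(p):
    """Permutation matrix sending the basis vector e_i to e_{p(i)}."""
    n = len(p)
    M = np.zeros((n, n))
    for i, j in enumerate(p):
        M[j, i] = 1.0
    return M

G = list(permutations(range(3)))             # the six elements of S3
chi = [np.trace(perm_matrix(p)) for p in G]  # character value = number of fixed points

# Multiplicity of the trivial representation: <chi, chi_triv> = (1/|G|) * sum of chi(g).
mult_trivial = sum(chi) / len(G)
# <chi, chi> is the sum of the squares of the multiplicities of the irreducibles.
norm_sq = sum(c * c for c in chi) / len(G)
print(mult_trivial, norm_sq)  # 1.0 and 2.0: trivial plus exactly one other irreducible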
Maschke's theorem holds more generally for fields of positive characteristic p, such as the finite fields, as long as the prime p is coprime to the order of G. When p and |G| have a common factor, there are G-representations that are not semisimple, which are studied in a subbranch called modular representation theory. Averaging techniques also show that if F is the real or complex numbers, then any G-representation preserves an inner product on V, in the sense that ⟨g · v, g · w⟩ = ⟨v, w⟩ for all g in G and v, w in V. Hence any G-representation is unitary. Unitary representations are automatically semisimple, since Maschke's result can be proven by taking the orthogonal complement of a subrepresentation. When studying representations of groups that are not finite, the unitary representations provide a good generalization of the real and complex representations of a finite group. Results such as Maschke's theorem and the unitary property that rely on averaging can be generalized to more general groups by replacing the average with an integral, provided that a suitable notion of integral can be defined. This can be done for compact topological groups (including compact Lie groups), using Haar measure, and the resulting theory is known as abstract harmonic analysis. Over arbitrary fields, another class of finite groups that have a good representation theory are the finite groups of Lie type. Important examples are linear algebraic groups over finite fields. The representation theory of linear algebraic groups and Lie groups extends these examples to infinite-dimensional groups, the latter being intimately related to Lie algebra representations. The importance of character theory for finite groups has an analogue in the theory of weights for representations of Lie groups and Lie algebras. Representations of a finite group G are also linked directly to algebra representations via the group algebra F[G], which is a vector space over F with the elements of G as a basis, equipped with the multiplication operation defined by the group operation, linearity, and the requirement that the group operation and scalar multiplication commute. Modular representations Modular representations of a finite group G are representations over a field whose characteristic is not coprime to |G|, so that Maschke's theorem no longer holds (because |G| is not invertible in F and so one cannot divide by it). Nevertheless, Richard Brauer extended much of character theory to modular representations, and this theory played an important role in early progress towards the classification of finite simple groups, especially for simple groups whose characterization was not amenable to purely group-theoretic methods because their Sylow 2-subgroups were "too small". As well as having applications to group theory, modular representations arise naturally in other branches of mathematics, such as algebraic geometry, coding theory, combinatorics and number theory. Unitary representations A unitary representation of a group G is a linear representation φ of G on a real or (usually) complex Hilbert space V such that φ(g) is a unitary operator for every g ∈ G. Such representations have been widely applied in quantum mechanics since the 1920s, thanks in particular to the influence of Hermann Weyl, and this has inspired the development of the theory, most notably through the analysis of representations of the Poincaré group by Eugene Wigner. 
One of the pioneers in constructing a general theory of unitary representations (for any group G rather than just for particular groups useful in applications) was George Mackey, and an extensive theory was developed by Harish-Chandra and others in the 1950s and 1960s. A major goal is to describe the "unitary dual", the space of irreducible unitary representations of G. The theory is most well-developed in the case that G is a locally compact (Hausdorff) topological group and the representations are strongly continuous. For G abelian, the unitary dual is just the space of characters, while for G compact, the Peter–Weyl theorem shows that the irreducible unitary representations are finite-dimensional and the unitary dual is discrete. For example, if G is the circle group S1, then the characters are given by integers, and the unitary dual is Z. For non-compact G, the question of which representations are unitary is a subtle one. Although irreducible unitary representations must be "admissible" (as Harish-Chandra modules) and it is easy to detect which admissible representations have a nondegenerate invariant sesquilinear form, it is hard to determine when this form is positive definite. An effective description of the unitary dual, even for relatively well-behaved groups such as real reductive Lie groups (discussed below), remains an important open problem in representation theory. It has been solved for many particular groups, such as SL(2,R) and the Lorentz group. Harmonic analysis The duality between the circle group S1 and the integers Z, or more generally, between a torus Tn and Zn is well known in analysis as the theory of Fourier series, and the Fourier transform similarly expresses the fact that the space of characters on a real vector space is the dual vector space. Thus unitary representation theory and harmonic analysis are intimately related, and abstract harmonic analysis exploits this relationship, by developing the analysis of functions on locally compact topological groups and related spaces. A major goal is to provide a general form of the Fourier transform and the Plancherel theorem. This is done by constructing a measure on the unitary dual and an isomorphism between the regular representation of G on the space L2(G) of square integrable functions on G and its representation on the space of L2 functions on the unitary dual. Pontrjagin duality and the Peter–Weyl theorem achieve this for abelian and compact G respectively. Another approach involves considering all unitary representations, not just the irreducible ones. These form a category, and Tannaka–Krein duality provides a way to recover a compact group from its category of unitary representations. If the group is neither abelian nor compact, no general theory is known with an analogue of the Plancherel theorem or Fourier inversion, although Alexander Grothendieck extended Tannaka–Krein duality to a relationship between linear algebraic groups and tannakian categories. Harmonic analysis has also been extended from the analysis of functions on a group G to functions on homogeneous spaces for G. The theory is particularly well developed for symmetric spaces and provides a theory of automorphic forms (discussed below). Lie groups A Lie group is a group that is also a smooth manifold. Many classical groups of matrices over the real or complex numbers are Lie groups. 
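The circle group S1, realized as the matrix Lie group SO(2), ties these themes together: its unitary dual is Z, with characters χn(θ) = e^{inθ}, and the orthonormality of these characters with respect to Haar measure is the fact underlying Fourier series. The following is a hedged numerical sketch, not from the article (inner is an invented helper):

import numpy as np

# Sample the normalized Haar measure on S1 uniformly over one period.
thetas = np.linspace(0, 2 * np.pi, 4096, endpoint=False)

def inner(m, n):
    """Approximate (1/2pi) * integral of chi_m(theta) * conj(chi_n(theta)) d(theta),
    with chi_n(theta) = exp(i n theta)."""
    vals = np.exp(1j * m * thetas) * np.exp(-1j * n * thetas)
    return vals.mean()

print(abs(inner(2, 2)))  # ~1: each character is a unit vector in L2(S1)
print(abs(inner(2, 3)))  # ~0: distinct characters are orthogonal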
Many of the groups important in physics and chemistry are Lie groups, and their representation theory is crucial to the application of group theory in those fields. The representation theory of Lie groups can be developed first by considering the compact groups, to which results of compact representation theory apply. This theory can be extended to finite-dimensional representations of semisimple Lie groups using Weyl's unitary trick: each semisimple real Lie group G has a complexification, which is a complex Lie group Gc, and this complex Lie group has a maximal compact subgroup K. The finite-dimensional representations of G closely correspond to those of K. A general Lie group is a semidirect product of a solvable Lie group and a semisimple Lie group (the Levi decomposition). The classification of representations of solvable Lie groups is intractable in general, but often easy in practical cases. Representations of semidirect products can then be analysed by means of general results called Mackey theory, which is a generalization of the methods used in Wigner's classification of representations of the Poincaré group. Lie algebras A Lie algebra over a field F is a vector space over F equipped with a skew-symmetric bilinear operation called the Lie bracket, which satisfies the Jacobi identity. Lie algebras arise in particular as tangent spaces to Lie groups at the identity element, leading to their interpretation as "infinitesimal symmetries". An important approach to the representation theory of Lie groups is to study the corresponding representation theory of Lie algebras, but representations of Lie algebras also have an intrinsic interest. Lie algebras, like Lie groups, have a Levi decomposition into semisimple and solvable parts, with the representation theory of solvable Lie algebras being intractable in general. In contrast, the finite-dimensional representations of semisimple Lie algebras are completely understood, after work of Élie Cartan. A representation of a semisimple Lie algebra 𝖌 is analysed by choosing a Cartan subalgebra, which is essentially a generic maximal subalgebra 𝖍 of 𝖌 on which the Lie bracket is zero ("abelian"). The representation of 𝖌 can be decomposed into weight spaces that are eigenspaces for the action of 𝖍 and the infinitesimal analogue of characters. The structure of semisimple Lie algebras then reduces the analysis of representations to easily understood combinatorics of the possible weights that can occur. Infinite-dimensional Lie algebras There are many classes of infinite-dimensional Lie algebras whose representations have been studied. Among these, an important class are the Kac–Moody algebras. They are named after Victor Kac and Robert Moody, who independently discovered them. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and share many of their combinatorial properties. This means that they have a class of representations that can be understood in the same way as representations of semisimple Lie algebras. Affine Lie algebras are a special case of Kac–Moody algebras, which have particular importance in mathematics and theoretical physics, especially conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras. 
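To make the weight-space decomposition above concrete, here is a hedged sketch, not from the article (sl2_irrep is an invented helper). It builds the standard n-dimensional irreducible representation of the semisimple Lie algebra sl(2) and checks the bracket relations [e, f] = h, [h, e] = 2e, [h, f] = −2f; the diagonal entries of h are the weights 2l, 2l − 2, ..., −2l of the representation with n = 2l + 1.

import numpy as np

def sl2_irrep(n):
    """Matrices e, f, h of the standard n-dimensional irreducible sl(2) representation."""
    e = np.zeros((n, n))
    f = np.zeros((n, n))
    for k in range(1, n):
        e[k - 1, k] = k * (n - k)  # e raises the weight
        f[k, k - 1] = 1.0          # f lowers the weight
    h = np.diag([float(n - 1 - 2 * k) for k in range(n)])  # the Cartan element
    return e, f, h

n = 5                              # the 5-dimensional representation, l = 2
e, f, h = sl2_irrep(n)
comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(e, f), h)        # [e, f] = h
assert np.allclose(comm(h, e), 2 * e)    # [h, e] = 2e
assert np.allclose(comm(h, f), -2 * f)   # [h, f] = -2f
print(np.diag(h))                        # the weights: [ 4.  2.  0. -2. -4.]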
Lie superalgebras Lie superalgebras are generalizations of Lie algebras in which the underlying vector space has a Z2-grading, and skew-symmetry and Jacobi identity properties of the Lie bracket are modified by signs. Their representation theory is similar to the representation theory of Lie algebras. Linear algebraic groups Linear algebraic groups (or more generally, affine group schemes) are analogues in algebraic geometry of Lie groups, but over more general fields than just R or C. In particular, over finite fields, they give rise to finite groups of Lie type. Although linear algebraic groups have a classification that is very similar to that of Lie groups, their representation theory is rather different (and much less well understood) and requires different techniques, since the Zariski topology is relatively weak, and techniques from analysis are no longer available. Invariant theory Invariant theory studies actions on algebraic varieties from the point of view of their effect on functions, which form representations of the group. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. The modern approach analyses the decomposition of these representations into irreducibles. Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially the theories of quadratic forms and determinants. Another subject with strong mutual influence is projective geometry, where invariant theory can be used to organize the subject, and during the 1960s, new life was breathed into the subject by David Mumford in the form of his geometric invariant theory. The representation theory of semisimple Lie groups has its roots in invariant theory and the strong links between representation theory and algebraic geometry have many parallels in differential geometry, beginning with Felix Klein's Erlangen program and Élie Cartan's connections, which place groups and symmetry at the heart of geometry. Modern developments link representation theory and invariant theory to areas as diverse as holonomy, differential operators and the theory of several complex variables. Automorphic forms and number theory Automorphic forms are a generalization of modular forms to more general analytic functions, perhaps of several complex variables, with similar transformation properties. The generalization involves replacing the modular group PSL2(R) and a chosen congruence subgroup by a semisimple Lie group G and a discrete subgroup Γ. Just as modular forms can be viewed as differential forms on a quotient of the upper half-plane H = PSL2(R)/SO(2), automorphic forms can be viewed as differential forms (or similar objects) on Γ\G/K, where K is (typically) a maximal compact subgroup of G. Some care is required, however, as the quotient typically has singularities. The quotient of a semisimple Lie group by a compact subgroup is a symmetric space and so the theory of automorphic forms is intimately related to harmonic analysis on symmetric spaces. Before the development of the general theory, many important special cases were worked out in detail, including the Hilbert modular forms and Siegel modular forms. Important results in the theory include the Selberg trace formula and the realization by Robert Langlands that the Riemann–Roch theorem could be applied to calculate the dimension of the space of automorphic forms. 
The subsequent notion of "automorphic representation" has proved of great technical value for dealing with the case that G is an algebraic group, treated as an adelic algebraic group. As a result, an entire philosophy, the Langlands program, has developed around the relation between the representation-theoretic and number-theoretic properties of automorphic forms. Associative algebras In one sense, associative algebra representations generalize both representations of groups and of Lie algebras. A representation of a group induces a representation of a corresponding group ring or group algebra, while representations of a Lie algebra correspond bijectively to representations of its universal enveloping algebra. However, the representation theory of general associative algebras does not have all of the nice properties of the representation theory of groups and Lie algebras. Module theory When considering representations of an associative algebra, one can forget the underlying field, and simply regard the associative algebra as a ring, and its representations as modules. This approach is surprisingly fruitful: many results in representation theory can be interpreted as special cases of results about modules over a ring. Hopf algebras and quantum groups Hopf algebras provide a way to improve the representation theory of associative algebras, while retaining the representation theory of groups and Lie algebras as special cases. In particular, the tensor product of two representations is a representation, as is the dual vector space. The Hopf algebras associated to groups have a commutative algebra structure, and so general Hopf algebras are known as quantum groups, although this term is often restricted to certain Hopf algebras arising as deformations of groups or their universal enveloping algebras. The representation theory of quantum groups has added surprising insights to the representation theory of Lie groups and Lie algebras, for instance through the crystal basis of Kashiwara. Generalizations Set-theoretic representations A set-theoretic representation (also known as a group action or permutation representation) of a group G on a set X is given by a function ρ from G to XX, the set of functions from X to X, such that for all g1, g2 in G and all x in X: ρ(g1g2)[x] = ρ(g1)[ρ(g2)[x]]. This condition and the axioms for a group imply that ρ(g) is a bijection (or permutation) for all g in G. Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group SX of X. Representations in other categories Every group G can be viewed as a category with a single object; morphisms in this category are just the elements of G. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor selects an object X in C and a group homomorphism from G to Aut(X), the automorphism group of X. In the case where C is VectF, the category of vector spaces over a field F, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of G in the category of sets. For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from G to the homeomorphism group of a topological space X. Three types of representations closely related to linear representations are: projective representations: in the category of projective spaces. These can be described as "linear representations up to scalar transformations". affine representations: in the category of affine spaces.
For example, the Euclidean group acts affinely upon Euclidean space. corepresentations of unitary and antiunitary groups: in the category of complex vector spaces with morphisms being linear or antilinear transformations. Representations of categories Since groups are categories, one can also consider representations of other categories. The simplest generalization is to monoids, which are categories with one object. Groups are monoids for which every morphism is invertible. General monoids have representations in any category. In the category of sets, these are monoid actions, but monoid representations on vector spaces and other objects can be studied. More generally, one can relax the assumption that the category being represented has only one object. In full generality, this is simply the theory of functors between categories, and little can be said. One special case has had a significant impact on representation theory, namely the representation theory of quivers. A quiver is simply a directed graph (with loops and multiple arrows allowed), but it can be made into a category (and also an algebra) by considering paths in the graph. Representations of such categories/algebras have illuminated several aspects of representation theory, for instance by allowing non-semisimple representation theory questions about a group to be reduced in some cases to semisimple representation theory questions about a quiver.
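To make the set-theoretic notion defined earlier concrete, here is a minimal Python sketch (the names and the choice of group are illustrative, not taken from the text) of the cyclic group Z/3 acting on a three-element set, together with a check of the defining condition ρ(g1g2)[x] = ρ(g1)[ρ(g2)[x]]:

    from itertools import product

    X = ["a", "b", "c"]   # the set being acted on
    GROUP = range(3)      # the cyclic group Z/3, written additively

    def rho(g):
        """The permutation of X induced by g: a cyclic shift by g positions."""
        return lambda x: X[(X.index(x) + g) % len(X)]

    # Verify the defining condition of a permutation representation.
    for g1, g2, x in product(GROUP, GROUP, X):
        assert rho((g1 + g2) % 3)(x) == rho(g1)(rho(g2)(x))
    print("rho is a permutation representation of Z/3 on X")

Since every ρ(g) here is invertible (ρ(3 − g) undoes it), ρ is indeed a homomorphism into the symmetric group of X, as the equivalent definition above requires.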
Mathematics
Algebra
null
7987684
https://en.wikipedia.org/wiki/Plutonium
Plutonium
Plutonium is a chemical element; it has symbol Pu and atomic number 94. It is a silvery-gray actinide metal that tarnishes when exposed to air, and forms a dull coating when oxidized. The element normally exhibits six allotropes and four oxidation states. It reacts with carbon, halogens, nitrogen, silicon, and hydrogen. When exposed to moist air, it forms oxides and hydrides that can expand the sample up to 70% in volume, which in turn flake off as a powder that is pyrophoric. It is radioactive and can accumulate in bones, which makes the handling of plutonium dangerous. Plutonium was first synthesized and isolated in late 1940 and early 1941, by deuteron bombardment of uranium-238 in the cyclotron at the University of California, Berkeley. First, neptunium-238 (half-life 2.1 days) was synthesized, which then beta-decayed to form the new element with atomic number 94 and atomic weight 238 (half-life 87.7 years). Since uranium had been named after the planet Uranus and neptunium after the planet Neptune, element 94 was named after Pluto, which at the time was also considered a planet. Wartime secrecy prevented the University of California team from publishing its discovery until 1948. Plutonium is the element with the highest atomic number known to occur in nature. Trace quantities arise in natural uranium deposits when uranium-238 captures neutrons emitted by the spontaneous fission of other uranium-238 atoms. The heavy isotope plutonium-244 has a half-life long enough that extreme trace quantities should have survived primordially (from the Earth's formation) to the present, but so far experiments have not been sensitive enough to detect it. Both plutonium-239 and plutonium-241 are fissile, meaning they can sustain a nuclear chain reaction, leading to applications in nuclear weapons and nuclear reactors. Plutonium-240 has a high rate of spontaneous fission, raising the neutron flux of any sample containing it. The presence of plutonium-240 limits a plutonium sample's usability for weapons or its quality as reactor fuel, and the percentage of plutonium-240 determines its grade (weapons-grade, fuel-grade, or reactor-grade). Plutonium-238 has a half-life of 87.7 years and emits alpha particles. It is a heat source in radioisotope thermoelectric generators, which are used to power some spacecraft. Plutonium isotopes are expensive and inconvenient to separate, so particular isotopes are usually manufactured in specialized reactors. Producing plutonium in useful quantities for the first time was a major part of the Manhattan Project during World War II, which developed the first atomic bombs. The implosion bombs detonated in the Trinity nuclear test in July 1945 and in the bombing of Nagasaki in August 1945 (the latter bomb named Fat Man) had plutonium cores. Human radiation experiments studying plutonium were conducted without informed consent, and several criticality accidents, some lethal, occurred after the war. Disposal of plutonium waste from nuclear power plants and dismantled nuclear weapons built during the Cold War is a nuclear-proliferation and environmental concern. Other sources of plutonium in the environment are fallout from the many above-ground nuclear tests, which are now banned. Characteristics Physical properties Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, though yellow and olive green are also reported. At room temperature plutonium is in its α (alpha) form. This allotrope is about as hard and brittle as gray cast iron.
When plutonium is alloyed with other metals, the high-temperature δ allotrope is stabilized at room temperature, making it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point (640 °C) and an unusually high boiling point (3,228 °C). This gives a large range of temperatures (over 2,500 kelvin wide) at which plutonium is liquid, but this range is neither the greatest among all actinides nor among all metals, with neptunium theorized to have the greatest range in both instances. The low melting point as well as the reactivity of the native metal compared to the oxide leads to plutonium oxides being a preferred form for applications such as nuclear fission reactor fuel (MOX fuel). Alpha decay, the release of a high-energy helium nucleus, is the most common form of radioactive decay for plutonium. A 5 kg mass of 239Pu contains about 12.5×10^24 atoms. With a half-life of 24,100 years, about 11.5×10^12 of its atoms decay each second by emitting a 5.157 MeV alpha particle. This amounts to 9.68 watts of power. Heat produced by the deceleration of these alpha particles makes it warm to the touch. Plutonium-238, due to its much shorter half-life, heats up to much higher temperatures and glows red hot with blackbody radiation if left without external heating or cooling. This heat has been used in radioisotope thermoelectric generators (see below). The resistivity of plutonium at room temperature is very high for a metal, and it gets even higher with lower temperatures, which is unusual for metals. This trend continues down to 100 K, below which resistivity rapidly decreases for fresh samples. Resistivity then begins to increase with time at around 20 K due to radiation damage, with the rate dictated by the isotopic composition of the sample. Because of self-irradiation, a sample of plutonium fatigues throughout its crystal structure, meaning the ordered arrangement of its atoms becomes disrupted by radiation with time. Self-irradiation can also lead to annealing, which counteracts some of the fatigue effects as temperature increases above 100 K. Unlike most materials, plutonium increases in density when it melts, by 2.5%, but the liquid metal exhibits a linear decrease in density with temperature. Near the melting point, liquid plutonium has very high viscosity and surface tension compared to other metals. Allotropes Plutonium normally has six allotropes and forms a seventh (zeta, ζ) at high temperature within a limited pressure range. These allotropes, which are different structural modifications or forms of an element, have very similar internal energies but significantly varying densities and crystal structures. This makes plutonium very sensitive to changes in temperature, pressure, or chemistry, and allows for dramatic volume changes following phase transitions from one allotropic form to another. The densities of the different allotropes vary from 16.00 g/cm³ to 19.86 g/cm³. The presence of these many allotropes makes machining plutonium very difficult, as it changes state very readily. For example, the α form exists at room temperature in unalloyed plutonium. It has machining characteristics similar to cast iron but changes to the plastic and malleable β (beta) form at slightly higher temperatures. The reasons for the complicated phase diagram are not entirely understood. The α form has a low-symmetry monoclinic structure, hence its brittleness, strength, compressibility, and poor thermal conductivity.
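The decay-heat arithmetic quoted above for 239Pu can be checked directly from the half-life and the alpha energy. The following is a back-of-the-envelope sketch using standard physical constants (which are not taken from this article):

    import math

    AVOGADRO = 6.022e23             # atoms per mole
    MOLAR_MASS = 239.05             # g/mol for plutonium-239
    HALF_LIFE_S = 24_100 * 3.156e7  # 24,100 years, in seconds
    ALPHA_MEV = 5.157               # alpha-particle energy per decay
    MEV_TO_J = 1.602e-13            # joules per MeV

    atoms = 5000 / MOLAR_MASS * AVOGADRO              # atoms in a 5 kg sample
    decays_per_s = math.log(2) / HALF_LIFE_S * atoms  # activity
    power_w = decays_per_s * ALPHA_MEV * MEV_TO_J

    print(f"{atoms:.2e} atoms, {decays_per_s:.2e} decays/s, {power_w:.2f} W")
    # ~1.26e25 atoms and ~1.15e13 decays/s, matching the figures above;
    # the alpha particles alone give ~9.5 W, and using the total energy
    # per decay (~5.24 MeV) brings this close to the quoted 9.68 W.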
Plutonium in the δ (delta) form normally exists in the 310 °C to 452 °C range but is stable at room temperature when alloyed with a small percentage of gallium, aluminium, or cerium, enhancing workability and allowing it to be welded. The δ form has more typical metallic character, and is roughly as strong and malleable as aluminium. In fission weapons, the explosive shock waves used to compress a plutonium core will also cause a transition from the usual δ phase plutonium to the denser α form, significantly helping to achieve supercriticality. The ε phase, the highest temperature solid allotrope, exhibits anomalously high atomic self-diffusion compared to other elements. Nuclear fission Plutonium is a radioactive actinide metal whose isotope, plutonium-239, is one of the three primary fissile isotopes (uranium-233 and uranium-235 are the other two); plutonium-241 is also highly fissile. To be considered fissile, an isotope's atomic nucleus must be able to break apart or fission when struck by a slow-moving neutron and to release enough additional neutrons to sustain the nuclear chain reaction by splitting further nuclei. Pure plutonium-239 may have a multiplication factor (keff) larger than one, which means that if the metal is present in sufficient quantity and with an appropriate geometry (e.g., a sphere of sufficient size), it can form a critical mass. During fission, a fraction of the nuclear binding energy, which holds a nucleus together, is released as a large amount of electromagnetic and kinetic energy (much of the latter being quickly converted to thermal energy). Fission of a kilogram of plutonium-239 can produce an explosion equivalent to about 20,000 tonnes of TNT. It is this energy that makes plutonium-239 useful in nuclear weapons and reactors. The presence of the isotope plutonium-240 in a sample limits its nuclear bomb potential, as 240Pu has a relatively high spontaneous fission rate (~440 fissions per second per gram; over 1,000 neutrons per second per gram), raising the background neutron levels and thus increasing the risk of predetonation. Plutonium is identified as either weapons-grade, fuel-grade, or reactor-grade based on the percentage of 240Pu that it contains. Weapons-grade plutonium contains less than 7% 240Pu. Fuel-grade plutonium contains 7%–19%, and power reactor-grade contains 19% or more 240Pu. Supergrade plutonium, with less than 4% of 240Pu, is used in United States Navy weapons stored near ship and submarine crews, due to its lower radioactivity. Plutonium-238 is not fissile but can undergo fission with fast neutrons, and it also decays by alpha emission. All plutonium isotopes can be "bred" into fissile material with one or more neutron absorptions, whether followed by beta decay or not. This makes non-fissile isotopes of plutonium a fertile material. Isotopes and nucleosynthesis Twenty-two radioisotopes of plutonium have been characterized, from 226Pu to 247Pu. The longest-lived are 244Pu, with a half-life of 80.8 million years; 242Pu, with a half-life of 373,300 years; and 239Pu, with a half-life of 24,110 years. All other isotopes have half-lives of less than 7,000 years. This element also has eight metastable states, though all have half-lives less than a second. 244Pu has been found in interstellar space and it has the longest half-life of any non-primordial radioisotope.
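The grade thresholds stated earlier in this section lend themselves to a simple classification rule; the sketch below is illustrative only, with the function name invented here and the boundaries taken from the percentages in the text:

    def plutonium_grade(pu240_percent: float) -> str:
        """Classify plutonium by its 240Pu content, per the thresholds above."""
        if pu240_percent < 4:
            return "supergrade"
        if pu240_percent < 7:
            return "weapons-grade"
        if pu240_percent < 19:
            return "fuel-grade"
        return "reactor-grade"

    print(plutonium_grade(3))    # supergrade
    print(plutonium_grade(6.5))  # weapons-grade
    print(plutonium_grade(24))   # reactor-grade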
The main decay modes of isotopes with mass numbers lower than the most stable isotope, 244Pu, are spontaneous fission and alpha emission, mostly forming uranium (92 protons) and neptunium (93 protons) isotopes as decay products (neglecting the wide range of daughter nuclei created by fission processes). The main decay mode for isotopes heavier than 244Pu, along with 241Pu and 243Pu, is beta emission, forming americium isotopes (95 protons). Plutonium-241 is the parent isotope of the neptunium series, decaying to americium-241 via beta emission. Plutonium-238 and 239 are the most widely synthesized isotopes. 239Pu is synthesized via the following reaction using uranium (U) and neutrons (n) via beta decay (β−) with neptunium (Np) as an intermediate: ^{238}_{92}U + ^{1}_{0}n → ^{239}_{92}U →(β−, 23.5 min) ^{239}_{93}Np →(β−, 2.3565 d) ^{239}_{94}Pu. Neutrons from the fission of uranium-235 are captured by uranium-238 nuclei to form uranium-239; a beta decay converts a neutron into a proton to form neptunium-239 (half-life 2.36 days) and another beta decay forms plutonium-239. Egon Bretscher, working on the British Tube Alloys project, predicted this reaction theoretically in 1940. Plutonium-238 is synthesized by bombarding uranium-238 with deuterons (D, or 2H, the nuclei of heavy hydrogen) in the following reaction: ^{238}_{92}U + ^{2}_{1}D → ^{238}_{93}Np + 2 ^{1}_{0}n, followed by ^{238}_{93}Np →(β−, 2.1 d) ^{238}_{94}Pu, where a deuteron hitting uranium-238 produces two neutrons and neptunium-238, which decays by emitting negative beta particles to form plutonium-238. Plutonium-238 can also be produced by neutron irradiation of neptunium-237. Decay heat and fission properties Plutonium isotopes undergo radioactive decay, which produces decay heat. Different isotopes produce different amounts of heat per mass. The decay heat is usually listed as watt/kilogram, or milliwatt/gram. In larger pieces of plutonium (e.g., a weapon pit) with inadequate heat removal, the resulting self-heating may be significant. Compounds and chemistry At room temperature, pure plutonium is silvery in color but gains a tarnish when oxidized. The element displays four common ionic oxidation states in aqueous solution and one rare one: Pu(III), as Pu3+ (blue lavender) Pu(IV), as Pu4+ (yellow brown) Pu(V), as PuO2+ (light pink) Pu(VI), as PuO22+ (pink orange) Pu(VII), as PuO53− (green); the heptavalent ion is rare. The color shown by plutonium solutions depends on both the oxidation state and the nature of the acid anion. It is the acid anion that influences the degree of complexing (how atoms connect to a central atom) of the plutonium species. Additionally, the formal +2 oxidation state of plutonium is known in the complex [K(2.2.2-cryptand)] [PuIICp″3], Cp″ = C5H3(SiMe3)2. A +8 oxidation state is possible as well in the volatile tetroxide PuO4. Though it readily decomposes via a reduction mechanism similar to that of FeO4, PuO4 can be stabilized in alkaline solutions and chloroform. Metallic plutonium is produced by reacting plutonium tetrafluoride with barium, calcium or lithium at 1200 °C. Metallic plutonium is attacked by acids, oxygen, and steam but not by alkalis and dissolves easily in concentrated hydrochloric, hydroiodic and perchloric acids. Molten metal must be kept in a vacuum or an inert atmosphere to avoid reaction with air. At 135 °C the metal will ignite in air and will explode if placed in carbon tetrachloride. Plutonium is a reactive metal. In moist air or moist argon, the metal oxidizes rapidly, producing a mixture of oxides and hydrides.
If the metal is exposed long enough to a limited amount of water vapor, a powdery surface coating of PuO2 is formed. Plutonium hydride also forms, but an excess of water vapor forms only PuO2. Plutonium shows very high, and reversible, reaction rates with pure hydrogen, forming plutonium hydride. It also reacts readily with oxygen, forming PuO and PuO2 as well as intermediate oxides; plutonium oxide occupies 40% more volume than plutonium metal. The metal reacts with the halogens, giving rise to compounds with the general formula PuX3, where X can be F, Cl, Br or I; PuF4 is also seen. The following oxyhalides are observed: PuOCl, PuOBr and PuOI. It will react with carbon to form PuC, nitrogen to form PuN and silicon to form PuSi2. The organometallic chemistry of plutonium complexes is typical for organoactinide species; a characteristic example of an organoplutonium compound is plutonocene. Computational chemistry methods indicate an enhanced covalent character in the plutonium-ligand bonding. Powders of plutonium, its hydrides and certain oxides like Pu2O3 are pyrophoric, meaning they can ignite spontaneously at ambient temperature and are therefore handled in an inert, dry atmosphere of nitrogen or argon. Bulk plutonium ignites only when heated above 400 °C. Pu2O3 spontaneously heats up and transforms into PuO2, which is stable in dry air, but reacts with water vapor when heated. Crucibles used to contain plutonium need to be able to withstand its strongly reducing properties. Refractory metals such as tantalum and tungsten, along with the more stable oxides, borides, carbides, nitrides and silicides, can tolerate this. Melting in an electric arc furnace can be used to produce small ingots of the metal without the need for a crucible. Cerium is used as a chemical simulant of plutonium for development of containment, extraction, and other technologies. Electronic structure Plutonium is an element in which the 5f electrons lie on the transition border between delocalized and localized behavior; it is therefore considered one of the most complex elements. The anomalous behavior of plutonium is caused by its electronic structure. The energy difference between the 6d and 5f subshells is very low. The size of the 5f shell is just enough to allow the electrons to form bonds within the lattice, on the very boundary between localized and bonding behavior. The proximity of energy levels leads to multiple low-energy electron configurations with near-equal energies. This leads to competing 5f^n 7s^2 and 5f^(n−1) 6d^1 7s^2 configurations, which causes the complexity of its chemical behavior. The highly directional nature of 5f orbitals is responsible for directional covalent bonds in molecules and complexes of plutonium. Alloys Plutonium can form alloys and intermediate compounds with most other metals. Exceptions include lithium, sodium, potassium, rubidium and caesium of the alkali metals; magnesium, calcium, strontium, and barium of the alkaline earth metals; and europium and ytterbium of the rare earth metals. Partial exceptions include the refractory metals chromium, molybdenum, niobium, tantalum, and tungsten, which are soluble in liquid plutonium, but insoluble or only slightly soluble in solid plutonium. Gallium, aluminium, americium, scandium and cerium can stabilize δ-phase plutonium at room temperature. Silicon, indium, zinc and zirconium allow formation of a metastable δ state when rapidly cooled.
High amounts of hafnium, holmium and thallium also allow some retention of the δ phase at room temperature. Neptunium is the only element that can stabilize the α phase at higher temperatures. Plutonium alloys can be produced by adding a metal to molten plutonium. If the alloying metal is reductive enough, plutonium can be added in the form of oxides or halides. The δ-phase plutonium–gallium alloy (PGA) and plutonium–aluminium alloy are produced by adding Pu(III) fluoride to molten gallium or aluminium, which has the advantage of avoiding dealing directly with the highly reactive plutonium metal. PGA is used for stabilizing the δ phase of plutonium, avoiding the α-phase and α–δ related issues. Its main use is in pits of implosion bombs. Plutonium–aluminium is an alternative to PGA. Aluminium was the original alloying element considered for δ-phase stabilization, but its tendency to react with alpha particles and release neutrons reduces its usability for nuclear weapons. Plutonium–aluminium alloy can also be used as a component of nuclear fuel. Plutonium–gallium–cobalt alloy (PuCoGa) is an unconventional superconductor, showing superconductivity below 18.5 K, an order of magnitude higher than the highest among heavy-fermion systems, and it has a large critical current. Plutonium–zirconium alloy can be used as nuclear fuel. Plutonium–cerium and plutonium–cerium–cobalt alloys are used as nuclear fuels. Plutonium–uranium, with about 15–30 mol.% plutonium, can be used as a nuclear fuel for fast breeder reactors. Its pyrophoric nature and high susceptibility to corrosion, to the point of self-igniting or disintegrating after exposure to air, require alloying with other components. Addition of aluminium, carbon or copper does not improve disintegration rates markedly; zirconium and iron alloys have better corrosion resistance, but they too disintegrate within several months in air. Addition of titanium and/or zirconium significantly increases the melting point of the alloy. Plutonium–uranium–titanium and plutonium–uranium–zirconium were investigated for use as nuclear fuels. The addition of the third element increases corrosion resistance, reduces flammability, and improves ductility, fabricability, strength, and thermal expansion. Plutonium–uranium–molybdenum has the best corrosion resistance, forming a protective film of oxides, but titanium and zirconium are preferred for physics reasons. Thorium–uranium–plutonium was investigated as a nuclear fuel for fast breeder reactors. Occurrence Trace amounts of plutonium-238, plutonium-239, plutonium-240, and plutonium-244 can be found in nature. Small traces of plutonium-239, a few parts per trillion, and its decay products are naturally found in some concentrated ores of uranium, such as the natural nuclear fission reactor in Oklo, Gabon. The ratio of plutonium-239 to uranium at the Cigar Lake Mine uranium deposit ranges from 2.4×10^−12 to 44×10^−12. These trace amounts of 239Pu originate in the following fashion: on rare occasions, 238U undergoes spontaneous fission, and in the process, the nucleus emits one or two free neutrons with some kinetic energy. When one of these neutrons strikes the nucleus of another 238U atom, it is absorbed by the atom, which becomes 239U. With a relatively short half-life, 239U decays to 239Np, which decays into 239Pu. Finally, exceedingly small amounts of plutonium-238, attributed to the extremely rare double beta decay of uranium-238, have been found in natural uranium samples.
Due to its relatively long half-life of about 80 million years, it was suggested that plutonium-244 occurs naturally as a primordial nuclide, but early reports of its detection could not be confirmed. Based on its likely initial abundance in the Solar System, present experiments as of 2022 are likely about an order of magnitude away from detecting live primordial 244Pu. However, its long half-life ensured its circulation across the solar system before its extinction, and indeed, evidence of the spontaneous fission of extinct 244Pu has been found in meteorites. The former presence of 244Pu in the early Solar System has been confirmed, since it manifests itself today as an excess of its daughters, either 232Th (from the alpha decay pathway) or xenon isotopes (from its spontaneous fission). The latter are generally more useful, because the chemistries of thorium and plutonium are rather similar (both are predominantly tetravalent) and hence an excess of thorium would not be strong evidence that some of it was formed as a plutonium daughter. 244Pu has the longest half-life of all transuranic nuclides and is produced only in the r-process in supernovae and colliding neutron stars; when nuclei are ejected from these events at high speed to reach Earth, 244Pu alone among transuranic nuclides has a long enough half-life to survive the journey, and hence tiny traces of live interstellar 244Pu have been found in the deep sea floor. Because 240Pu also occurs in the decay chain of 244Pu, it must therefore also be present in secular equilibrium, albeit in even tinier quantities. Minute traces of plutonium are usually found in the human body due to the 550 atmospheric and underwater nuclear tests that have been carried out, and to a small number of major nuclear accidents. Most atmospheric and underwater nuclear testing was stopped by the Limited Test Ban Treaty in 1963, which was signed and ratified by three of the nuclear powers: the United States, the United Kingdom, and the Soviet Union. France continued atmospheric nuclear testing until 1974 and China until 1980. All subsequent nuclear testing was conducted underground. History Discovery Enrico Fermi and a team of scientists at the University of Rome reported that they had discovered element 94 in 1934. Fermi called the element hesperium and mentioned it in his Nobel Lecture in 1938. The sample actually contained products of nuclear fission, primarily barium and krypton. Nuclear fission, discovered in Germany in 1938 by Otto Hahn and Fritz Strassmann, was unknown at the time. Plutonium (specifically, plutonium-238) was first produced, isolated and then chemically identified between December 1940 and February 1941 by Glenn T. Seaborg, Edwin McMillan, Emilio Segrè, Joseph W. Kennedy, and Arthur Wahl by deuteron bombardment of uranium in the cyclotron at the Berkeley Radiation Laboratory at the University of California, Berkeley. Neptunium-238 was created directly by the bombardment but decayed by beta emission with a half-life of a little over two days, which indicated the formation of element 94. The first bombardment took place on December 14, 1940, and the new element was first identified through oxidation on the night of February 23–24, 1941. A paper documenting the discovery was prepared by the team and sent to the journal Physical Review in March 1941, but publication was delayed until a year after the end of World War II due to security concerns.
At the Cavendish Laboratory in Cambridge, Egon Bretscher and Norman Feather realized that a slow neutron reactor fuelled with uranium would theoretically produce substantial amounts of plutonium-239 as a by-product. They calculated that element 94 would be fissile and had the added advantage of being chemically different from uranium, from which it could easily be separated. McMillan had recently named the first transuranic element neptunium after the planet Neptune, and suggested that element 94, being the next element in the series, be named for what was then considered the next planet, Pluto. Nicholas Kemmer of the Cambridge team independently proposed the same name, based on the same reasoning as the Berkeley team. Seaborg originally considered the name "plutium", but later thought that it did not sound as good as "plutonium". He chose the letters "Pu" as a joke, in reference to the interjection "P U" to indicate an especially disgusting smell, which passed without notice into the periodic table. Alternative names considered by Seaborg and others were "ultimium" or "extremium" because of the erroneous belief that they had found the last possible element on the periodic table. Hahn and Strassmann, and independently Kurt Starke, were at this point also working on transuranic elements in Berlin. It is likely that Hahn and Strassmann were aware that plutonium-239 should be fissile. However, they did not have a strong neutron source. Element 93 was reported by Hahn and Strassmann, as well as Starke, in 1942. Hahn's group did not pursue element 94, likely because they were discouraged by McMillan and Abelson's lack of success in isolating it when they had first found element 93. However, since Hahn's group had access to the stronger cyclotron in Paris at this point, they would likely have been able to detect plutonium had they tried, albeit in tiny quantities (a few becquerels).
The nuclear properties of plutonium-239 were also studied; researchers found that when it is hit by a neutron it breaks apart (fissions), releasing more neutrons and energy. These neutrons can strike other plutonium-239 atoms, which fission in turn, and so on, in an exponentially growing chain reaction. This can result in an explosion large enough to destroy a city if enough of the isotope is concentrated to form a critical mass. During the early stages of research, animals were used to study the effects of radioactive substances on health. These studies began in 1944 at the University of California at Berkeley's Radiation Laboratory and were conducted by Joseph G. Hamilton. Hamilton was looking to answer questions about how plutonium's behavior in the body would vary depending on exposure mode (oral ingestion, inhalation, absorption through skin), retention rates, and how plutonium would be fixed in tissues and distributed among the various organs. Hamilton started administering soluble microgram portions of plutonium-239 compounds to rats using different valence states and different methods of introducing the plutonium (oral, intravenous, etc.). Eventually, the lab at Chicago also conducted its own plutonium injection experiments using different animals such as mice, rabbits, fish, and even dogs. The results of the studies at Berkeley and Chicago showed that plutonium's physiological behavior differed significantly from that of radium. The most alarming result was that there was significant deposition of plutonium in the liver and in the "actively metabolizing" portion of bone. Furthermore, the rate of plutonium elimination in the excreta differed between species of animals by as much as a factor of five. Such variation made it extremely difficult to estimate what the rate would be for human beings.
Within ten days, he discovered that reactor-bred plutonium had a higher concentration of 240Pu than cyclotron-produced plutonium. 240Pu has a high spontaneous fission rate, raising the overall background neutron level of the plutonium sample. The original gun-type plutonium weapon, code-named "Thin Man", had to be abandoned as a result—the increased number of spontaneous neutrons meant that nuclear pre-detonation (fizzle) was likely. The entire plutonium weapon design effort at Los Alamos was soon changed to the more complicated implosion device, code-named "Fat Man". In an implosion bomb, plutonium is compressed to high density with explosive lenses, a technically more daunting task than the simple gun-type bomb, but necessary for a plutonium bomb. Uranium, by contrast, can be used with either method. Construction of the Hanford B Reactor, the first industrial-sized nuclear reactor for the purposes of material production, was completed in March 1945. B Reactor produced the fissile material for the plutonium weapons used during World War II. B, D and F were the initial reactors built at Hanford, and six additional plutonium-producing reactors were built later at the site. By the end of January 1945, the highly purified plutonium underwent further concentration in the completed chemical isolation building, where remaining impurities were removed successfully. Los Alamos received its first plutonium from Hanford on February 2. While it was still by no means clear that enough plutonium could be produced for use in bombs by the war's end, Hanford was in operation by early 1945. Only two years had passed since Col. Franklin Matthias first set up his temporary headquarters on the banks of the Columbia River. According to Kate Brown, the plutonium production plants at Hanford and Mayak in Russia, over a period of four decades, "both released more than 200 million curies of radioactive isotopes into the surrounding environment—twice the amount expelled in the Chernobyl disaster in each instance". Most of this radioactive contamination over the years was part of normal operations, but unforeseen accidents did occur, and plant management kept this secret as the pollution continued unabated. In 2004, a safe was discovered during excavations of a burial trench at the Hanford nuclear site. Inside the safe were various items, including a large glass bottle containing a whitish slurry which was subsequently identified as the oldest sample of weapons-grade plutonium known to exist. Isotope analysis by Pacific Northwest National Laboratory indicated that the plutonium in the bottle was manufactured in the X-10 Graphite Reactor at Oak Ridge during 1944. Trinity and Fat Man atomic bombs The first atomic bomb test, codenamed "Trinity" and detonated on July 16, 1945, near Alamogordo, New Mexico, used plutonium as its fissile material. The implosion design of "Gadget", as the Trinity device was codenamed, used conventional explosive lenses to compress a sphere of plutonium into a supercritical mass, which was simultaneously showered with neutrons from "Urchin", an initiator made of polonium and beryllium (neutron source: (α, n) reaction). Together, these ensured a runaway chain reaction and explosion. The weapon weighed over 4 tonnes, though it had just 6 kg of plutonium. About 20% of the plutonium in the Trinity weapon fissioned, releasing an energy equivalent to about 20,000 tons of TNT.
An identical design was used in "Fat Man", dropped on Nagasaki, Japan, on August 9, 1945, killing 35,000–40,000 people and destroying 68%–80% of war production at Nagasaki. Only after the announcement of the first atomic bombs was the existence and name of plutonium made known to the public by the Manhattan Project's Smyth Report. Cold War use and waste Large stockpiles of weapons-grade plutonium were built up by both the Soviet Union and the United States during the Cold War. The U.S. reactors at Hanford and the Savannah River Site in South Carolina produced 103 tonnes, and an estimated 170 tonnes of military-grade plutonium was produced in the USSR. Each year about 20 tonnes of the element is still produced as a by-product of the nuclear power industry. As much as 1000 tonnes of plutonium may be in storage, with more than 200 tonnes of that either inside or extracted from nuclear weapons. SIPRI estimated the world plutonium stockpile in 2007 as about 500 tonnes, divided equally between weapon and civilian stocks. Radioactive contamination at the Rocky Flats Plant primarily resulted from two major plutonium fires in 1957 and 1969. Much lower concentrations of radioactive isotopes were released throughout the operational life of the plant from 1952 to 1992. Prevailing winds from the plant carried airborne contamination south and east, into populated areas northwest of Denver. The contamination of the Denver area by plutonium from the fires and other sources was not publicly reported until the 1970s. According to a 1972 study coauthored by Edward Martell, "In the more densely populated areas of Denver, the Pu contamination level in surface soils is several times fallout", and the plutonium contamination "just east of the Rocky Flats plant ranges up to hundreds of times that from nuclear tests". As noted by Carl Johnson in Ambio, "Exposures of a large population in the Denver area to plutonium and other radionuclides in the exhaust plumes from the plant date back to 1953." Weapons production at the Rocky Flats plant was halted after a combined FBI and EPA raid in 1989 and years of protests. The plant has since been shut down, with its buildings demolished and completely removed from the site. In the U.S., some plutonium extracted from dismantled nuclear weapons is melted to form glass logs of plutonium oxide that weigh two tonnes. The glass is made of borosilicates mixed with cadmium and gadolinium. These logs are planned to be encased in stainless steel and stored underground in bore holes that will be back-filled with concrete. The U.S. planned to store plutonium in this way at the Yucca Mountain nuclear waste repository, north-west of Las Vegas, Nevada. On March 5, 2009, Energy Secretary Steven Chu told a Senate hearing "the Yucca Mountain site no longer was viewed as an option for storing reactor waste". Starting in 1999, military-generated nuclear waste has been entombed at the Waste Isolation Pilot Plant in New Mexico. In a Presidential Memorandum dated January 29, 2010, President Obama established the Blue Ribbon Commission on America's Nuclear Future. In their final report the Commission put forth recommendations for developing a comprehensive strategy to pursue, including: "Recommendation #1: The United States should undertake an integrated nuclear waste management program that leads to the timely development of one or more permanent deep geological facilities for the safe disposal of spent fuel and high-level nuclear waste".
Medical experimentation During and after the end of World War II, scientists working on the Manhattan Project and other nuclear weapons research projects conducted studies of the effects of plutonium on laboratory animals and human subjects. Animal studies found that a few milligrams of plutonium per kg of tissue is a lethal dose. For human subjects, this involved injecting solutions typically containing 5 micrograms (μg) of plutonium into hospital patients thought to be terminally ill, or to have a life expectancy of less than ten years due to age or chronic disease. This was reduced to 1 μg in July 1945 after animal studies found that the way plutonium distributes itself in bones makes it more dangerous than radium. Most of the subjects, Eileen Welsome says, were poor, powerless, and sick. In 1945–47, eighteen human test subjects were injected with plutonium without informed consent. The tests were used to create diagnostic tools to determine the uptake of plutonium in the body in order to develop safety standards for working with plutonium. Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 μg of plutonium on April 10, 1945, at Oak Ridge, Tennessee. This experiment was under the supervision of Harold Hodge. Other experiments directed by the United States Atomic Energy Commission and the Manhattan Project continued into the 1970s. The Plutonium Files chronicles the lives of the subjects of the secret program by naming each person involved and discussing the ethical and medical research conducted in secret by the scientists and doctors. The episode is now considered to be a serious breach of medical ethics and of the Hippocratic Oath. The government covered up most of these actions until 1993, when President Bill Clinton ordered a change of policy and federal agencies then made available relevant records. The resulting investigation was undertaken by the president's Advisory Committee on Human Radiation Experiments, and it uncovered much of the material about plutonium research on humans. The committee issued a controversial 1995 report which said that "wrongs were committed" but it did not condemn those who perpetrated them. Applications Explosives 239Pu is a key fissile component in nuclear weapons, due to its ease of fission and availability. Encasing the bomb's plutonium pit in a tamper (a layer of dense material) decreases the critical mass by reflecting escaping neutrons back into the plutonium core. This reduces the critical mass from 16 kg to 10 kg, which corresponds to a sphere with a diameter of about 10 cm. This critical mass is about a third of that for uranium-235. The Fat Man plutonium bombs used explosive compression of plutonium to obtain significantly higher density than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. Hypothetically, as little as 4 kg of plutonium, and perhaps even less, could be used to make a single atomic bomb using very sophisticated assembly designs. Mixed oxide fuel Spent nuclear fuel from normal light water reactors contains plutonium, but it is a mixture of plutonium-242, 240, 239 and 238. The mixture is not sufficiently enriched for efficient nuclear weapons, but can be used once as MOX fuel.
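The sphere size quoted in the Explosives paragraph above can be sanity-checked from the density figures given earlier; the sketch below assumes the 10 kg mass is at the α-phase density of 19.86 g/cm³ (the assumption, not the source, supplies that choice):

    import math

    ALPHA_DENSITY = 19.86   # g/cm^3, densest plutonium allotrope (see above)
    MASS_G = 10_000         # the 10 kg tamper-reflected critical mass

    volume_cm3 = MASS_G / ALPHA_DENSITY
    radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)
    print(f"sphere diameter ~ {2 * radius_cm:.1f} cm")   # ~9.9 cm

The result, roughly 9.9 cm, is consistent with the "about 10 cm" stated in the text.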
Accidental neutron capture causes the amount of plutonium-242 and 240 to grow each time the plutonium is irradiated in a reactor with low-speed "thermal" neutrons, so that after the second cycle, the plutonium can only be consumed by fast neutron reactors. If fast neutron reactors are not available (the normal case), excess plutonium is usually discarded, and forms one of the longest-lived components of nuclear waste. The desire to consume this plutonium and other transuranic fuels and reduce the radiotoxicity of the waste is the usual reason nuclear engineers give to make fast neutron reactors. The most common chemical process, PUREX (Plutonium–URanium EXtraction), reprocesses spent nuclear fuel to extract plutonium and uranium which can be used to form a mixed oxide (MOX) fuel for reuse in nuclear reactors. Weapons-grade plutonium can be added to the fuel mix. MOX fuel is used in light water reactors and consists of 60 kg of plutonium per tonne of fuel; after four years, three-quarters of the plutonium is burned (turned into other elements). MOX fuel has been in use since the 1980s, and is widely used in Europe. Breeder reactors are specifically designed to create more fissionable material than they consume. MOX fuel improves total burnup. A fuel rod is reprocessed after three years of use to remove waste products, which by then account for 3% of the total weight of the rods. Any uranium or plutonium isotopes produced during those three years are retained, and the rod goes back into production. The presence of up to 1% gallium per mass in weapons-grade plutonium alloy has the potential to interfere with long-term operation of a light water reactor. Plutonium recovered from spent reactor fuel poses little proliferation hazard, because of excessive contamination with non-fissile plutonium-240 and plutonium-242. Separation of the isotopes is not feasible. A dedicated reactor operating on very low burnup (hence minimal exposure of newly formed plutonium-239 to additional neutrons, which would transform it into heavier isotopes of plutonium) is generally required to produce material suitable for use in efficient nuclear weapons. While "weapons-grade" plutonium is defined to contain at least 92% plutonium-239 (of the total plutonium), the United States has managed to detonate a device of less than 20 kilotons using plutonium believed to contain only about 85% plutonium-239, so-called "fuel-grade" plutonium. The "reactor-grade" plutonium produced by a regular LWR burnup cycle typically contains less than 60% Pu-239, with up to 30% parasitic Pu-240/Pu-242, and 10–15% fissile Pu-241. It is unknown if a device using plutonium obtained from reprocessed civil nuclear waste can be detonated; however, such a device could hypothetically fizzle and spread radioactive materials over a large urban area. The IAEA conservatively classifies plutonium of all isotopic vectors as "direct-use" material, that is, "nuclear material that can be used for the manufacture of nuclear explosives components without transmutation or further enrichment". Power and heat source Plutonium-238 has a half-life of 87.74 years. It emits a large amount of thermal energy with low levels of both gamma rays/photons and neutrons. Being an alpha emitter, it combines high energy radiation with low penetration and thereby requires minimal shielding. A sheet of paper can be used to shield against the alpha particles from 238Pu. One kilogram of the isotope generates about 570 watts of heat.
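Because the heat output tracks the radioactive decay, its decline over a mission can be estimated from the half-life alone. The following sketch applies the standard decay law to the Voyager figures given below; it is a rough model only, since the quoted in-flight output also reflects losses beyond the fuel's decay:

    HALF_LIFE_YEARS = 87.74   # plutonium-238, as given above

    def remaining_power(p0_watts: float, years: float) -> float:
        """Power left after `years`, assuming pure radioactive decay."""
        return p0_watts * 2 ** (-years / HALF_LIFE_YEARS)

    print(f"{remaining_power(500, 30):.0f} W")   # ~395 W after 30 years

Decay alone would leave about 395 W of a 500 W source after 30 years; the lower ~300 W reported for the Voyager sources is plausibly explained by degradation of the thermoelectric converters on top of the decay itself.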
These characteristics make it well-suited for electrical power generation for devices that must function without direct maintenance for timescales approximating a human lifetime. It is therefore used in radioisotope thermoelectric generators and radioisotope heater units such as those in the Cassini, Voyager, Galileo and New Horizons space probes, and the Curiosity and Perseverance (Mars 2020) Mars rovers. The twin Voyager spacecraft were launched in 1977, each containing a 500 watt plutonium power source. Over 30 years later, each source still produces about 300 watts, which allows limited operation of each spacecraft. An earlier version of the same technology powered five Apollo Lunar Surface Experiment Packages, starting with Apollo 12 in 1969. 238Pu has also been used successfully to power artificial heart pacemakers, to reduce the risk of repeated surgery. It has been largely replaced by lithium-based primary cells, but there were somewhere between 50 and 100 plutonium-powered pacemakers still implanted and functioning in living patients in the United States. By the end of 2007, the number of plutonium-powered pacemakers was reported to be down to just nine. 238Pu was studied as a way to provide supplemental heat for scuba divers. 238Pu mixed with beryllium is used to generate neutrons for research purposes. Precautions Toxicity There are two aspects to the harmful effects of plutonium: radioactivity and heavy metal poisoning. Plutonium compounds are radioactive and accumulate in bone marrow. Contamination by plutonium oxide has resulted from nuclear disasters and radioactive incidents, including military nuclear accidents where nuclear weapons have burned. Studies of the effects of these smaller releases, as well as of the widespread radiation poisoning sickness and death following the atomic bombings of Hiroshima and Nagasaki, have provided considerable information regarding the dangers, symptoms and prognosis of radiation poisoning, which in the case of the Japanese survivors was largely unrelated to direct plutonium exposure. The decay of plutonium releases three types of ionizing radiation: alpha (α), beta (β), and gamma (γ). Either acute or longer-term exposure carries a danger of serious health outcomes including radiation sickness, genetic damage, cancer, and death. The danger increases with the amount of exposure. α-radiation can travel only a short distance and cannot travel through the outer, dead layer of human skin. β-radiation can penetrate human skin, but cannot go all the way through the body. γ-radiation can go all the way through the body. Even though α radiation cannot penetrate the skin, ingested or inhaled plutonium does irradiate internal organs. α-particles generated by inhaled plutonium have been found to cause lung cancer in a cohort of European nuclear workers. The skeleton, where plutonium accumulates, and the liver, where it collects and becomes concentrated, are at risk. Plutonium is not absorbed into the body efficiently when ingested; only 0.04% of plutonium oxide is absorbed after ingestion. Plutonium absorbed by the body is excreted very slowly, with a biological half-life of 200 years. Plutonium passes only slowly through cell membranes and intestinal boundaries, so absorption by ingestion and incorporation into bone structure proceeds very slowly. Donald Mastick accidentally swallowed a small amount of plutonium(III) chloride; it was detectable for the next thirty years of his life, but he appeared to suffer no ill effects.
Plutonium is more dangerous if inhaled than if ingested. The risk of lung cancer increases once the total radiation dose equivalent of inhaled plutonium exceeds 400 mSv. The U.S. Department of Energy estimates that the lifetime cancer risk from inhaling 5,000 plutonium particles, each about 3 μm wide, is 1% over the background U.S. average. Ingestion or inhalation of large amounts may cause acute radiation poisoning and possibly death. However, no human being is known to have died because of inhaling or ingesting plutonium, and many people have measurable amounts of plutonium in their bodies. The "hot particle" theory, in which a particle of plutonium dust irradiates a localized spot of lung tissue, is not supported by mainstream research; such particles are more mobile than originally thought, and toxicity is not measurably increased due to particulate form. When inhaled, plutonium can pass into the bloodstream. Once in the bloodstream, plutonium moves throughout the body and into the bones, liver, or other body organs. Plutonium that reaches body organs generally stays in the body for decades and continues to expose the surrounding tissue to radiation and thus may cause cancer. A commonly cited quote by Ralph Nader states that a pound of plutonium dust spread into the atmosphere would be enough to kill 8 billion people. This was disputed by Bernard Cohen, an opponent of the generally accepted linear no-threshold model of radiation toxicity. Cohen estimated that one pound of plutonium could kill no more than 2 million people by inhalation, so that the toxicity of plutonium is roughly equivalent to that of nerve gas. Several populations of people who have been exposed to plutonium dust (e.g. people living downwind of Nevada test sites, Nagasaki survivors, nuclear facility workers, and "terminally ill" patients injected with plutonium in 1945–46 to study its metabolism) have been carefully followed and analyzed. Cohen found these studies inconsistent with high estimates of plutonium toxicity, citing cases such as Albert Stevens, who survived into old age after being injected with plutonium. "There were about 25 workers from Los Alamos National Laboratory who inhaled a considerable amount of plutonium dust during 1940s; according to the hot-particle theory, each of them has a 99.5% chance of being dead from lung cancer by now, but there has not been a single lung cancer among them."
Similar studies found large accumulations of plutonium in the respiratory and digestive organs of cod, flounder and herring. Plutonium is also toxic to fish larvae in nuclear waste areas; undeveloped eggs are at higher risk than developed adult fish exposed to the element in these areas. Research at Oak Ridge National Laboratory showed that carp and minnow embryos raised in solutions containing plutonium did not hatch; eggs that did hatch displayed significant abnormalities compared with control embryos. Overall, higher concentrations of plutonium have been found to cause problems in marine fauna exposed to the element. Criticality potential Care must be taken to avoid the accumulation of amounts of plutonium which approach critical mass, particularly because plutonium's critical mass is only a third of that of uranium-235. A critical mass of plutonium emits lethal amounts of neutrons and gamma rays. Plutonium in solution is more likely to form a critical mass than the solid form, due to moderation by the hydrogen in water. Criticality accidents have occurred, sometimes killing people. Careless handling of tungsten carbide bricks around a 6.2 kg plutonium sphere resulted in a fatal dose of radiation at Los Alamos on August 21, 1945, when scientist Harry Daghlian received a dose estimated at 5.1 sievert (510 rem) and died 25 days later. Nine months later, another Los Alamos scientist, Louis Slotin, died from a similar accident involving a beryllium reflector and the same plutonium core (the "demon core") that had previously killed Daghlian. In December 1958, during a process of purifying plutonium at Los Alamos, a critical mass formed in a mixing vessel, killing chemical operator Cecil Kelley. Other nuclear accidents have occurred in the Soviet Union, Japan, the United States, and many other countries. Flammability Metallic plutonium is a fire hazard, especially if finely divided. In a moist environment, plutonium forms hydrides on its surface, which are pyrophoric and may ignite in air at room temperature. Plutonium expands up to 70% in volume as it oxidizes and thus may break its container. The radioactivity of the burning material is another hazard. Magnesium oxide sand is probably the most effective material for extinguishing a plutonium fire: it cools the burning material, acting as a heat sink, and also blocks off oxygen. Special precautions are necessary to store or handle plutonium in any form; generally a dry inert gas atmosphere is required. Transportation Land and sea Plutonium is usually transported as the more stable plutonium oxide in a sealed package. A typical transport consists of one truck carrying one protected shipping container holding a number of packages with a total weight varying from 80 to 200 kg of plutonium oxide. A sea shipment may consist of several containers, each holding a sealed package. The U.S. Nuclear Regulatory Commission requires that the material be solid rather than powder if the contents surpass 0.74 TBq (20 curies) of radioactivity. In 2016, the ships Pacific Egret and Pacific Heron of Pacific Nuclear Transport Ltd. transported 331 kg (730 lbs) of plutonium to a United States government facility at Savannah River, South Carolina. Air U.S. Government air transport regulations permit the transport of plutonium by air, subject to restrictions on other dangerous materials carried on the same flight, packaging requirements, and stowage in the rearmost part of the aircraft.
In 2012, media reports revealed that plutonium had been flown out of Norway on commercial passenger airlines roughly every other year, including once in 2011. Regulations permit an airplane to transport up to 15 grams of fissionable material. Such plutonium transportation poses no problems, according to a senior advisor (seniorrådgiver) at the Norwegian Radiation Protection Authority (Statens strålevern).
Physical sciences
Chemical elements_2
null
4631496
https://en.wikipedia.org/wiki/Agricultural%20soil%20science
Agricultural soil science
Agricultural soil science is a branch of soil science that deals with the study of edaphic conditions as they relate to the production of food and fiber. In this context, it is also a constituent of the field of agronomy and is thus also described as soil agronomy. History Prior to the development of pedology in the 19th century, agricultural soil science (or edaphology) was the only branch of soil science. The bias of early soil science toward viewing soils only in terms of their agricultural potential continues to define the soil science profession in both academic and popular settings (Baveye, 2006). Current status Agricultural soil science follows a holistic method. Soil is investigated in relation to, and as an integral part of, terrestrial ecosystems, but is also recognized as a manageable natural resource. Agricultural soil science studies the chemical, physical, biological, and mineralogical composition of soils as they relate to agriculture. Agricultural soil scientists develop methods that will improve the use of soil and increase the production of food and fiber crops. Emphasis continues to grow on the importance of soil sustainability. Soil degradation, such as erosion, compaction, lowered fertility, and contamination, continues to be a serious concern. Agricultural soil scientists conduct research in irrigation and drainage, tillage, soil classification, plant nutrition, soil fertility, and other areas. Although maximizing plant (and thus animal) production is a valid goal, sometimes it may come at a high cost, whether readily evident (e.g. massive crop disease stemming from monoculture) or long-term (e.g. the impact of chemical fertilizers and pesticides on human health). An agricultural soil scientist may come up with a plan that can maximize production using sustainable methods and solutions; to do that, they must look into a number of fields of science, including agricultural science, physics, chemistry, biology, meteorology and geology. Kinds of soil and their variables Some soil variables of special interest to agricultural soil science are: Soil texture or soil composition: Soils are composed of solid particles of various sizes. In decreasing order of size, these particles are sand, silt and clay. Every soil can be classified according to the relative percentages of sand, silt and clay it contains (see the sketch following this section). Aeration and porosity: Atmospheric air contains elements such as oxygen, nitrogen and carbon, which are prerequisites for life on Earth. In particular, all cells (including root cells) require oxygen to function, and if conditions become anaerobic they fail to respire and metabolize. Aeration in this context refers to the mechanisms by which air is delivered to the soil. In natural ecosystems soil aeration is chiefly accomplished through the vibrant activity of the biota. Humans commonly aerate the soil by tilling and plowing, yet such practice may cause degradation. Porosity refers to the air-holding capacity of the soil.
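As a concrete illustration of texture classification, the following Python sketch assigns a coarse textural class from sand, silt and clay percentages. The thresholds are a simplified, hypothetical reduction of the USDA soil texture triangle, not the full official definition:

def texture_class(sand, silt, clay):
    # Percentages describe the fine-earth fraction and must total ~100.
    if abs(sand + silt + clay - 100) > 1:
        raise ValueError("sand + silt + clay must total 100%")
    if sand >= 85 and clay < 10:
        return "sand"
    if clay >= 40:
        return "clay"
    if silt >= 80 and clay < 12:
        return "silt"
    return "loam"  # broad middle of the triangle, greatly simplified

print(texture_class(40, 40, 20))  # -> loam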
Technology
Academic disciplines
null
4633039
https://en.wikipedia.org/wiki/Trigger%20%28firearms%29
Trigger (firearms)
A trigger is a mechanism that actuates the function of a ranged weapon such as a firearm, airgun, crossbow, or speargun. The word may also be used to describe a switch that initiates the operation of other non-shooting devices such as a trap, a power tool, or a quick release. A small amount of energy applied to the trigger leads to the release of much more energy. Most triggers use a small flat or slightly curved lever (called the trigger blade) depressed by the index finger, but some weapons such as the M2 Browning machine gun or the Iron Horse TOR ("thumb-operated receiver") use a push-button-like thumb-actuated trigger design, and others like the Springfield Armory M6 Scout use a squeeze-bar trigger similar to the "ticklers" on medieval European crossbows. Although the word "trigger" technically implies the entire mechanism (known as the trigger group), colloquially it is usually used to refer specifically to the trigger blade. Most firearm triggers are "single-action", meaning that the trigger is designed only for the single function of disengaging the sear, which allows a spring-tensioned hammer/striker to be released. In "double-action" firearm designs, the trigger also performs the additional function of cocking the hammer, and there are many designs where the trigger is used for a range of other functions. Furthermore, triggers can be divided into direct triggers (also called single-stage triggers), which are popular for hunting, and pressure triggers (also called two-stage triggers), which are popular on competition rifles. Function Firearms use triggers to initiate the firing of a cartridge seated within the gun barrel chamber. This is accomplished by actuating a striking device through a combination of a mainspring (which stores elastic energy), a trap mechanism that can hold the spring under tension, an intermediate mechanism to transmit the kinetic energy of the spring's release, and a firing pin that eventually strikes and ignites the primer. There are two primary types of striking mechanism – hammer and striker. A hammer is a pivoting metallic component under spring tension which, when released, swings forward to strike a firing pin like a mallet hitting a punch or chisel; the firing pin then relays the hammer's impulse by moving forward rapidly along its longitudinal axis. A striker is essentially a firing pin directly loaded by a spring, eliminating the need to be struck by a separate hammer. The firing pin/striker then collides with the cartridge primer positioned ahead of it, which contains shock-sensitive compounds (e.g., lead styphnate) that spark to ignite the propellant powder within the cartridge case and thus discharge the projectile. The trapping interface between the trigger and the hammer/striker is typically referred to as the sear surface. Mechanisms vary: some have this surface directly on the trigger and hammer, while others have separate sears or other connecting parts. Stages of a trigger pull The trigger pull can be divided into three mechanical stages: Takeup or pretravel: The movement of the trigger before the sear moves. Break: The movement of the trigger during the sear's movement up to the point of release, where the felt resistance suddenly decreases. Overtravel: The movement of the trigger after the sear has already released. Takeup When considering the practical accuracy of a firearm, the trigger takeup is often considered the least critical stage of the trigger pull. Triggers are often classified as either single-stage or two-stage based on the takeup.
Single-stage triggers have no discernible resistance during the takeup, encountering resistance only at its very end (often described as hitting "the wall"), when actually actuating the trigger break. Two-stage triggers have a noticeable but relatively light resistance during the takeup, followed by the distinctly increased resistance of the actual trigger break. This allows the shooter to gradually "ease into" the trigger pull instead of hitting the trigger blade with a hasty hard squeeze. A single-stage trigger is often called a direct trigger and is popular on hunting rifles. A two-stage trigger is often called a pressure trigger and is popular on competition rifles. Some fully adjustable triggers can be adjusted to function as either a single-stage or two-stage trigger by adjusting the takeup. Setting the takeup travel (also known as the first stage) to near zero essentially makes the trigger a single-stage trigger. Some single-stage triggers (e.g., the Glock Safe Action trigger and the Savage AccuTrigger) have an integral safety with a noticeable spring resistance that can functionally mimic a two-stage trigger. Break The trigger break is named for the sudden loss of resistance when the sear reaches the point of release, which is described as resembling the breaking of a rigid material when its strength fails under stress. The actuation force required to overcome the sear resistance during the break is known as the trigger weight, which is usually measured with a force gauge in newtons in SI units, or alternatively kilograms or grams in metric units, and pounds and/or ounces in US customary units (a small unit-conversion sketch follows below). The break is often considered the most critical stage of the trigger pull for achieving good practical accuracy, since it happens just prior to the shot being discharged and can cause unwanted shakes of the shooter's hand at the instant of firing. Shooter preferences vary; some prefer a soft break with a smooth but discernible amount of trigger travel during firing, while others prefer a crisp break with a heavier weight and little or no discernible movement. A perceivably slow trigger break is often referred to as "creep", and is frequently described as an unfavorable feature. Overtravel The trigger overtravel happens immediately after the break. It is typically a short distance and can be considered an inertially accelerated motion caused by the residual push of the finger coupled with the sudden decrease in resistance after the trigger break. It can be a very critical factor for accuracy, because shaking movements during this phase may occur before the projectile leaves the barrel; this is especially important with firearms with long barrels, slow projectiles and heavy trigger weights, where the more significant drop in resistance can make the trigger finger overshoot and shake in an uncontrolled fashion. Having some overtravel provides a "buffer zone" that prevents the shooter from "jerking the trigger", allowing the remnant pressing force from the finger to be dampened via a "follow-through" motion. Although a perceivable overtravel can be felt as adding to the "creep" of the trigger break, it is not always considered a bad thing by some shooters. An overtravel stop will arrest the motion of the trigger blade and prevent excessive movement.
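Because trigger weights are quoted in several unit systems, a small conversion helper can make comparisons concrete. A minimal Python sketch; the 5.5 lbf example value is purely illustrative:

def trigger_weight_newtons(pounds=0.0, ounces=0.0, grams=0.0):
    # Force conversions: 1 lbf ≈ 4.4482 N, 1 ozf ≈ 0.27801 N, 1 gf ≈ 0.0098067 N
    return pounds * 4.4482 + ounces * 0.27801 + grams * 0.0098067

n = trigger_weight_newtons(pounds=5.5)   # an illustrative 5.5 lbf trigger
print(round(n, 1), "N, i.e. about", round(n / 9.80665, 2), "kgf")
# -> 24.5 N, i.e. about 2.49 kgf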
Reset When the user releases the trigger, it travels back toward its resting position. On semi-automatic firearms this return movement eventually passes the reset point, where the trigger-disconnector mechanism resets itself to its resting state, in which pulling the trigger again releases the sear. The reset event does not occur in double-action firearms or in fully automatic firearms. Types There are numerous types of trigger designs, typically categorized according to which functions the trigger is tasked to perform, a.k.a. the trigger action (not to be confused with the action of the whole firearm, which refers to all the components that handle the cartridge, including the magazine, bolt, hammer and firing pin/striker, extractor and ejector, in addition to the trigger). While a trigger is primarily designed to set off a shot by releasing the hammer/striker, it may also perform additional functions such as cocking (loading against a spring) the hammer/striker, rotating a revolver's cylinder, deactivating internal safeties, transitioning between different firing modes (see progressive trigger), or reducing the pull weight (see set trigger). Single-action A single-action (SA) trigger is the earliest and mechanically simplest of trigger types. It is called "single-action" because it performs the single function of releasing the hammer/striker (and nothing else), while the hammer/striker must be cocked by separate means. Almost all single-shot and repeating long arms (rifles, shotguns, submachine guns, machine guns, etc.) use this type of trigger. "Classic" single-action revolvers of the mid-to-late 19th century include black-powder caplock designs such as the Colt 1860 "Army" Model and Colt 1851 "Navy" Model, and European models like the LeMat, as well as early metallic cartridge revolvers such as the Colt Model 1873 "Single Action Army" (named for its trigger mechanism) and the Smith & Wesson Model 3, all of which required the hammer to be cocked with the thumb before firing. Single-action triggers with manually cocked external hammers lasted a while longer in some break-action shotguns and in dangerous game rifles, where the hunter did not want to rely on an unnecessarily complex or fragile weapon. While single-action revolvers never lost favor in the US right up until the birth of the semi-automatic pistol, double-action revolvers such as the Beaumont–Adams were designed in Europe before the American Civil War broke out and saw great popularity all through the latter half of the 19th century, with certain numbers being sold in the US as well. While many European and some American revolvers were designed as double-action models throughout the late 19th century, for the first half of the 20th century all semi-automatic pistols were single-action weapons, requiring the weapon to be carried cocked and loaded with the safety on, or uncocked with an empty chamber (Colt M1911, Mauser C96, Luger P.08, Tokarev TT, Browning Hi-Power). The difference between these weapons and single-action revolvers is that while a single-action revolver requires the user to manually cock the hammer before each shot, a single-action semi-automatic pistol only requires manual cocking for the first shot, after which the slide reciprocates under recoil to automatically recock the hammer for the next shot; it is thus always cocked and ready unless the user manually decocks the hammer, encounters a misfiring cartridge, or pulls the trigger on an empty chamber (on older weapons lacking a "last round bolt hold open" feature).
In the 1930s, Walther introduced the first "double-action" (actually DA/SA hybrid) semi-automatic pistols, the PPK and P.38 models, which featured a revolver-style double-action trigger, allowing the weapon to be carried with a round chambered and the hammer lowered. After the first shot, they would fire subsequent shots like a single-action pistol. These pistols rapidly gained popularity, and traditional single-action-only pistols rapidly lost favor, although they still retain a dedicated following among enthusiasts. Today, a typical revolver or semi-automatic pistol is a DA/SA one, carried in double-action mode but firing most of its shots in single-action mode. Double-action only A double-action trigger, also known as double-action only (DAO) to prevent confusion with the more common hybrid DA/SA designs, is a trigger that must perform the double function of both cocking and releasing the hammer/striker. Such a trigger design either has no internal sear mechanism capable of holding the hammer/striker in a cocked position (so cocking and releasing have to happen in one uninterrupted sequence), or has the whole hammer shrouded and/or its thumb spur machined off, preventing the user from manipulating it separately. This design requires a trigger pull to both cock and trip the hammer/striker for every single shot, unlike a DA/SA, which requires a double-action trigger pull only for the first shot (or a typical DA/SA revolver, which can fire single-action any time the user wishes but uses double-action as a default). This means that there is no single-action function for any shot, and the hammer or striker always rests in the down position until the trigger pull begins. With semi-automatics, this means that, unlike DA/SA weapons, the hammer does not remain cocked after the first round is fired, and every shot is in double-action mode. With revolvers, this means that one does not have the option of cocking the gun before shooting and must always shoot in double-action mode. Although there have been revolvers designed with trigger mechanisms lacking a single-action mechanism altogether, more commonly DAO revolvers are modifications of existing DA/SA models, with identical internals, only with access to the hammer prevented, either by covering it with a shroud or by removing the thumb spur. In both cases, the goal is to prevent the possible snagging of the hammer spur on clothing or a holster. Because of the resulting limitation in accuracy, the majority of DAO revolvers have been short-barreled, close-range "snub" weapons, where rapidity of draw is essential and limited accuracy is an acceptable compromise. The purpose of a DAO action in a semi-automatic is mostly to avoid the change in trigger pull between the first and subsequent shots that one experiences in a DA/SA pistol, while avoiding the perceived danger of carrying a cocked single-action handgun; it also avoids having to carry a cocked and loaded pistol, or having to lower the hammer on a loaded chamber, if one fires only a partial magazine. A good example of this action in a semi-automatic is the SIG Sauer DAK trigger, or the DAO action of the SIG P250. For striker-fired pistols such as the Taurus 24/7, the striker will remain in the rest position through the entire reloading cycle. This term applies most often to semi-automatic handguns; however, it can also apply to some revolvers such as the Smith & Wesson Centennial, the Type 26 Revolver, and the Enfield No.
2 Mk I* and Mk I** revolvers, in which there is no external hammer spur, or which simply lack an internal sear mechanism capable of holding the hammer in the cocked position. Double-action/single-action A double-action/single-action (DA/SA) trigger is a hybrid design combining the features of both single- and double-action mechanisms. It is also known as traditional double-action (TDA), as the vast majority of modern "double-action" handguns (both revolvers and semi-automatic pistols) use this type of trigger instead of "double-action only" (DAO). In simple terms, "double-action" refers to a trigger mechanism that both cocks the hammer and then releases the sear, thus performing two "acts", although strictly the term describes doing both with a single trigger pull. However, in practice most double-action guns feature the optional ability to cock the hammer separately, reducing the trigger to performing just one action. This is opposed to "double-action only" firearms, which completely lack the capability to fire in single-action mode. In a DA/SA trigger, the mechanism is designed with an internal sear that allows the trigger to both cock and release the hammer/striker when fully pulled, or to merely lock the hammer/striker in the cocked position when it is pulled to the rear while the trigger is not depressed. In a revolver, this means that simply squeezing the trigger when the hammer is lowered will both cock and release it. If the user pulls the hammer back with their thumb, but does not press the trigger, the mechanism will lock the hammer in the cocked position until the trigger is pressed, just like a single-action. Firing in double-action mode allows a quicker initiation of fire, but this is compromised by a longer, heavier trigger pull, which can affect accuracy compared with the lighter, shorter trigger pull of single-action fire. In a DA/SA semi-automatic pistol, the trigger mechanism functions identically to that of a DA revolver. However, this is combined with the ability of the pistol's slide to automatically cock the hammer when firing. Thus, the weapon can be carried with the hammer down on a loaded chamber, reducing the perceived danger of carrying a single-action semi-automatic. When the user is ready to fire, simply pulling the trigger will cock and release the hammer in double-action mode. When the weapon fires, the cycling slide automatically cocks the hammer to the rear, meaning that the rest of the shots will be fired in single-action mode, unless the hammer is manually lowered again. This gives the positive aspects of a single-action trigger without the need to carry the pistol "cocked and locked" (with a loaded chamber and cocked hammer), or with an empty chamber, which would require the user to chamber a round before firing. A potential drawback of a DA/SA weapon is that the shooter must be comfortable dealing with two different trigger pulls: the longer, heavier DA first pull and the shorter, lighter subsequent SA pulls. The difference between these trigger pulls can affect the accuracy of the crucial first few shots in an emergency situation. Although there is little need for a safety on a DA/SA handgun when carrying it loaded with the hammer down, after the first shots are fired the hammer will be cocked and the chamber loaded. Thus, most DA/SA guns feature either a conventional safety that prevents the hammer from accidentally dropping, or a "decocker" – a lever that safely and gently drops the hammer (i.e. decocks the gun) without fear of the gun firing.
The latter is the more popular because, without a decocker, the user is forced to lower the hammer by hand onto a loaded chamber, with all the attendant safety risks that involves, in order to return the gun to double-action mode. Revolvers almost never feature safeties, since they are traditionally carried uncocked and require the user to physically cock the hammer prior to every single-action shot, unlike a DA/SA semi-automatic, which cocks itself every time the slide is cycled. There are many examples of DA/SA semi-automatics, the Little Tom pistol being the first, followed by the Walther PPK and Walther P38. Modern examples include weapons such as the Beretta 92, among others. Almost all revolvers that are not specified as single-action models are capable of firing in both double- and single-action mode, for example the Smith & Wesson Model 27, S&W Model 60, Colt Police Positive and Colt Python. Early double-action designs included the Beaumont–Adams and Tranter black-powder percussion revolvers. There are some revolvers that can only be fired in double-action mode (DAO), but these are almost always existing double-action/single-action models modified so that the hammer cannot be cocked manually, rather than weapons designed that way from the factory. Release trigger A release trigger releases the hammer or striker when the trigger is released by the shooter, rather than when it is pulled. Binary trigger ("pull and release") A binary trigger is a trigger for a semi-automatic firearm that drops the hammer both when the trigger is pulled and when it is released. Examples include triggers for the AR-15 series of rifles produced by Franklin Armory, Fostech Outdoors, and Liberty Gun Works. The AR-15 trigger produced by Liberty Gun Works functions only in pull-and-release mode and has no way to catch the hammer on release, while the other two have three-position safety selectors and a way to capture the hammer on release. In these triggers, the third selector position activates the pull-and-release mode, while the center position causes the trigger to drop the hammer only when pulled. Set trigger A set trigger allows a shooter to have a greatly reduced trigger pull (the resistance of the trigger) while maintaining a degree of safety in the field, compared with a conventional, very light trigger. There are two types: single set and double set. Set triggers are most likely to be seen on customized weapons and competition rifles, where a light trigger pull is beneficial to accuracy. Single set trigger A single set trigger is usually one trigger that may be fired with a conventional amount of trigger pull weight or may be "set" – usually by pushing forward on the trigger, or on a small lever attached to the rear of the trigger. This takes up the slack (or "take-up") in the trigger and allows for a much lighter trigger pull, colloquially known as a hair trigger. Double set trigger A double set trigger achieves the same result but uses two triggers: one sets the trigger and the other fires the weapon. Double set triggers can be further classified into two different phases. A double set, single phase trigger can only be operated by first pulling the set trigger and then pulling the firing trigger. A double set, double phase trigger can be operated as a standard trigger if the set trigger is not pulled, or as a set trigger by first pulling the set trigger. Double set, double phase triggers offer the versatility of both a standard trigger and a set trigger.
Pre-set (striker or hammer) Pre-set strikers and hammers apply only to semi-automatic handguns. Upon firing a cartridge or loading the chamber, the hammer or striker comes to rest in a partially cocked position. The trigger serves the function of completing the cocking cycle and then releasing the striker or hammer. While technically two actions, this differs from a double-action trigger in that the trigger is not capable of fully cocking the striker or hammer. It differs from single-action in that if the striker or hammer were to release, it would generally not be capable of igniting the primer. Examples of pre-set strikers are the Glock, Smith & Wesson M&P, Springfield Armory XD-S variant (only), Kahr Arms, FN FNS series and Ruger SR series pistols. This type of trigger mechanism is sometimes referred to as a striker-fired action (SFA). Examples of pre-set hammers are the Kel-Tec P-32 and Ruger LCP pistols. Pre-set hybrid Pre-set hybrid triggers are similar to a DA/SA trigger in reverse. The first pull of the trigger is pre-set. If the striker or hammer fails to discharge the cartridge, the trigger may be pulled again and will operate as double-action only (DAO) until the cartridge discharges or the malfunction is cleared. This allows the operator to make a second attempt to fire a cartridge after a misfire malfunction, as opposed to a single-action, in which the only thing to do if a round fails to fire is to rack the slide, clearing the round and recocking the hammer. While this can be advantageous in that many rounds will fire on being struck a second time, and it is faster to pull the trigger a second time than to cycle the action, if the round fails to fire on the second strike the user will be forced to clear the round anyway, thus using up even more time than if they had simply done so in the first place. The Taurus PT 24/7 Pro pistol (not to be confused with the first-generation 24/7, which was a traditional pre-set) offered this feature starting in 2006. The Walther P99 Anti-Stress is another example. Variable triggers Double-crescent trigger A double-crescent trigger provides select-fire capability without the need for a fire mode selector switch. Pressing the upper segment of the trigger produces semi-automatic fire, while holding the lower segment produces fully automatic fire. Though considered innovative at the time, the feature was eliminated on most firearms due to its complexity. Examples include the MG 34, Kulsprutegevär m/40 automatic rifle, M1946 Sieg automatic rifle, Osario Selectiva, and Star Model Z62. Progressive/staged trigger A progressive, or staged, trigger allows different firing rates based on how far it is depressed. For example, when pulled lightly, the weapon fires a single shot; when depressed further, the weapon fires at a fully automatic rate. Examples include the FN P90, Jatimatic, CZ Model 25, PM-63, BXP, F1 submachine gun, Vigneron submachine gun, Wimmersperg Spz-kr, and Steyr AUG. Relative merits Each trigger mechanism has its own merits. Historically, the first type of trigger was the single-action. This is the simplest mechanism and generally gives the shortest, lightest, and smoothest pull available. The pull is also consistent from shot to shot, so no adjustments in technique are needed for proper accuracy. On a single-action revolver, for which the hammer must be manually cocked prior to firing, an added level of safety is present.
On a semi-automatic, the hammer will be cocked and made ready to fire by the process of chambering a round, and as a result an external safety is sometimes employed. Double-action triggers provide the ability to fire the gun whether the hammer is cocked or uncocked. This feature is desirable for military, police, or self-defense pistols. The primary disadvantage of any double-action trigger is the extra length the trigger must be pulled and the extra weight required to overcome the spring tension of the hammer or striker. DA/SA pistols are versatile mechanisms. These firearms generally have a manual safety that may additionally serve to decock the hammer. Some have a facility (generally a lever or button) to safely lower the hammer. As a disadvantage, these controls are often intermingled with other controls such as slide releases, magazine releases, take-down levers, takedown lever lock buttons, loaded chamber indicators and barrel tip-up levers; this variety can be confusing and requires a more complicated manual of arms. Another disadvantage is the difference between the first double-action pull and subsequent single-action pulls. DAO firearms resolve some DA/SA shortcomings by making every shot a double-action shot. Because there is no difference in pull weights, training and practice are simplified. Additionally, the heavier trigger pull can help to prevent a negligent discharge under stress, a particular advantage for a police pistol. These weapons also generally lack any type of external safety. DAO is common among police agencies and for small, personal-protection firearms. The primary drawback is that the additional trigger pull weight and travel required for each shot reduce accuracy. Pre-set triggers offer a balance of pull weight, trigger travel, safety, and consistency. Glock popularized this trigger type in modern pistols, and many other manufacturers have since released pre-set striker products of their own. The primary disadvantage is that pulling the trigger a second time after a failure to fire will not re-strike the primer. In normal handling of the firearm this is not an issue: loading the gun requires that the slide be retracted, pre-setting the striker. Clearing a malfunction also usually involves retracting the slide, following the "tap, rack, bang" procedure.
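To make the differences between these trigger actions concrete, the following Python sketch models the kind of pull a shooter feels on each shot under three of the mechanisms described above. It is a simplified, hypothetical model: real lockwork involves sears, disconnectors and safeties that are omitted here.

def pull_sequence(action, shots=3):
    # Kind of trigger pull felt for each shot from a semi-automatic pistol
    # that starts with a round chambered and the hammer down.
    pulls = []
    hammer_cocked = False
    for _ in range(shots):
        if action == "SA":
            # Single-action: the hammer must first be thumb-cocked.
            pulls.append("light/short" if hammer_cocked else "thumb-cock, then light/short")
        elif action == "DAO":
            pulls.append("long/heavy")  # the hammer never stays cocked
        elif action == "DA/SA":
            pulls.append("light/short" if hammer_cocked else "long/heavy")
        else:
            raise ValueError("unknown trigger action")
        # The cycling slide leaves the hammer cocked, except in DAO designs.
        hammer_cocked = (action != "DAO")
    return pulls

print(pull_sequence("DA/SA"))  # ['long/heavy', 'light/short', 'light/short']
print(pull_sequence("DAO"))    # ['long/heavy', 'long/heavy', 'long/heavy']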
Technology
Mechanisms_2
null
4635897
https://en.wikipedia.org/wiki/Handfish
Handfish
Handfish are marine ray-finned fishes belonging to the family Brachionichthyidae, a group which comprises five genera and 14 extant species and which is classified within the suborder Antennarioidei in the order Lophiiformes, the anglerfishes. These benthic marine fish are unusual in the way they propel themselves: by walking on the sea floor rather than swimming. Taxonomy The handfish were first proposed as a family, Brachionichthyidae, in 1878 by the American ichthyologist Theodore Gill. The Brachionichthyidae is classified within the suborder Antennarioidei within the order Lophiiformes, the anglerfishes. The Brachionichthyidae is regarded as the most basal family within the suborder Antennarioidei. Genera The handfish family, Brachionichthyidae, contains the genera Brachionichthys, Brachiopsilus, Pezichthys, Sympterichthys and Thymichthys. Distribution Handfish are found today in the coastal waters of southern and eastern Australia and around the island state of Tasmania. This is the most species-rich of the few marine fish families endemic to the Australian region, with all but three species found there. There are 14 species of handfish around Tasmania. The biology of handfishes is poorly known, and their typically small population sizes and restricted distributions make them highly vulnerable to disturbance. Some species are considered to be critically endangered. Anatomy Handfish grow up to long, and have skin covered with denticles (tooth-like scales), giving them the alternative name of warty anglers. They are slow-moving fish that prefer to 'walk' rather than swim, using their modified pectoral fins to move about on the sea floor. These highly modified fins have the appearance of hands, hence their scientific name, from the Latin bracchium meaning "arm" and the Greek ichthys meaning "fish". Like other anglerfish, they possess an illicium, a modified dorsal fin ray above the mouth, but it is short and does not appear to be used as a fishing lure. The second dorsal spine is joined to the third by a flap of skin, forming a crest. Fossil record The prehistoric species Histionotophorus bassani, from the Lutetian of Monte Bolca, is now considered to be a handfish, sometimes even being included in the genus Brachionichthys. Considering the low extant diversity, restricted geographical distribution, and very meagre fossil record of antennarioids in general, the existence of fossil representatives of the family Brachionichthyidae is unusual. Conservation status In 1996, the spotted handfish (Brachionichthys hirsutus) became the first marine fish to be listed as critically endangered on the IUCN Red List. With its only habitat in the Derwent River estuary and surrounds, it is threatened by the Northern Pacific seastar's invasion of southern Australian waters. The Northern Pacific seastar (Asterias amurensis) preys not only on the fish eggs, but also on the sea squirts (ascidians) that help form the substrate on which the fish spawn. The cause of the decline in spotted handfish is unclear. Suggested causes include disturbance of benthic communities and predation on egg masses by the introduced northern Pacific seastar, habitat modification through increased siltation, heavy metal contamination and urban effluent. The lack of a pelagic larval stage and low rates of dispersal may be responsible for their restricted distributions, and may also have an impact on the handfishes' ability to recolonise areas where they once occurred. In March 2020, the smooth handfish (Sympterichthys unipennis) was declared extinct on the IUCN Red List.
Once common enough to be one of the first fish species described by European explorers of Australia, but not seen for well over a century, it was the first modern-day marine fish to be officially declared extinct. However, this was reversed in September 2021, as there is not sufficient data to confirm this status. In October 2021, the endangered and very rare pink handfish (Brachiopsilus dianthus) was seen for the first time since 1999, in footage from a camera placed on the sea bed off Tasmania at a depth of . Prior to this sighting, it had been assumed that this species was confined to shallow waters. The discovery that it has a greater range than previously thought may give cause for optimism regarding its survival. Current status of species Three species of handfish are listed as threatened under the Environment Protection and Biodiversity Conservation Act 1999 (EPBC Act) and by the IUCN: Brachionichthys hirsutus, spotted handfish – Critically Endangered under the EPBC Act and IUCN; Thymichthys politus, red handfish – Critically Endangered under the EPBC Act and IUCN; and Brachiopsilus ziebelli, Ziebell's handfish – Vulnerable under the EPBC Act, Critically Endangered under the IUCN. All three of the above are listed as Endangered under the Tasmanian Threatened Species Protection Act 1995, and all handfish species are protected under the Tasmanian Living Marine Resources Management Act 1995, which prohibits their collection in State waters without a permit.
Biology and health sciences
Acanthomorpha
Animals
33455
https://en.wikipedia.org/wiki/Web%20server
Web server
A web server is computer software and underlying hardware that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so. The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers. A resource sent from a web server can be a pre-existing file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications. Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages. History This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web and the Internet; for the sake of clarity and understandability, some key historical information reported below may be similar to that found in one or more of those history articles. Initial WWW project (1989–1991) In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments and was read by several people. In October 1990 the proposal was reformulated and enriched (with Robert Cailliau as co-author), and finally approved. Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on the NeXTSTEP OS installed on NeXT workstations: a graphical web browser, called WorldWideWeb; a portable line mode web browser; and a web server, later known as CERN httpd. Those early browsers retrieved web pages written in a simple early form of HTML from web server(s) using a new basic communication protocol that was named HTTP 0.9. In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it. Soon after, those programs, along with their source code, were made available to people interested in their usage. Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and further develop on top of them. Berners-Lee began promoting the adoption and use of those programs, along with their porting to other operating systems. Fast and wild development (1991–1995) In December 1991, the first web server outside Europe was installed at SLAC (U.S.A.).
This was a very important event because it started trans-continental web communications between web browsers and web servers. In 1991–1993 the CERN web server program continued to be actively developed by the WWW group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed. In April 1993, CERN issued a public official statement declaring that the three components of Web software (the basic line-mode client, the web server and the library of common code), along with their source code, were placed in the public domain. This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed). At the beginning of 1994, the most notable among new web servers was NCSA httpd, which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML FORMs in order to send data to a web server), highlighted the potential of web technology for publishing and distributed computing applications. In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters and other professionals interested in that server started to write and collect patches, made possible by the NCSA httpd source code being in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started. At the end of 1994, a new commercial web server, named Netsite, was released with its own specific features. It was the first of many similar products developed first by Netscape, then by Sun Microsystems, and finally by Oracle Corporation. In mid-1995, the first version of IIS was released, for the Windows NT OS, by Microsoft. This marked the entry into the field of World Wide Web technologies of a very important commercial developer and vendor that has played, and still plays, a key role on both sides (client and server) of the web. In the second half of 1995, the CERN and NCSA web servers started to decline in global percentage usage because of the widespread adoption of new web servers which had much faster development cycles, more features, more fixes applied, and better performance than the previous ones. Explosive growth and competition (1996–2014) At the end of 1996, there were already over fifty known (different) web server software programs available to anybody who wanted to own an Internet domain name and/or to host websites. Many of them were short-lived and were replaced by other web servers. The publication of the RFCs for protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability.
Between 1996 and 1999, Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options, whereas among the freely available and open-source programs the Apache HTTP Server held the lead as the preferred server, because of its reliability and its many features. In those years there was also another commercial, highly innovative and thus notable web server, called Zeus (now discontinued), known as one of the fastest and most scalable web servers available on the market, at least until the first decade of the 2000s, despite its low percentage of usage. Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward IIS dropped to much lower percentages of usage than Apache (see also market share). From 2005–2006, Apache started to improve its speed and its scalability by introducing new performance features (e.g. the event MPM and a new content cache). As those new performance improvements were initially marked as experimental, they were not enabled by many of its users for a long time, and so Apache suffered even more from the competition of commercial servers and, above all, of other open-source servers which meanwhile had already achieved far superior performance (mostly when serving static content) since the beginning of their development, and which at the time of the Apache decline could also offer a long enough list of well-tested advanced features. In fact, in the early 2000s, not only other commercial and highly competitive web servers, e.g. LiteSpeed, but also many other open-source programs emerged, often of excellent quality and very high performance, among which should be noted Hiawatha, Cherokee HTTP server, Lighttpd, Nginx and other derived/related products, also available with commercial support. Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC 2616) to 4, 6 or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages. Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks). New challenges (2015 and later years) In 2015, a new protocol version, HTTP/2, was published as an RFC, and as the implementation of the new specification was not trivial at all, a dilemma arose among developers of less popular web servers (e.g. those with a percentage of usage lower than 1%–2%) about whether or not to add support for that new protocol version.
In fact, supporting HTTP/2 often required radical changes to a server's internal implementation due to many factors (practically always required encrypted connections; the capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port; binary representation of HTTP messages; message priority; compression of HTTP headers; use of streams, also known as TCP/IP sub-connections, and the related flow control; etc.), and so a few developers of those web servers opted not to support the new HTTP/2 version (at least in the near future), mainly for these reasons: the HTTP/1.x protocols would be supported by browsers anyway for a very long time (maybe forever), so there would be no incompatibility between clients and servers in the near future; implementing HTTP/2 was considered a task of overwhelming complexity that could open the door to a whole new class of bugs that did not exist before 2015, and so would require notable investment in developing and testing the implementation of the new protocol; and adding HTTP/2 support could always be done later if the effort became justified. Instead, developers of the most popular web servers rushed to offer the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point, and because the most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of ever-increasing web traffic and really wanted to install and try, as soon as possible, something that could drastically lower the number of TCP/IP connections and speed up access to hosted websites. In 2020–2021 these HTTP/2 implementation dynamics (among top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC for the HTTP/3 protocol. Technical overview The following technical overview should be considered only as an attempt to give a few very limited examples of some features that may be implemented in a web server and some of the tasks that it may perform, in order to give a sufficiently broad picture of the topic. A web server program plays the role of the server in a client–server model by implementing one or more versions of the HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage. The complexity and efficiency of a web server program may vary considerably depending on, for example: the common features implemented; the common tasks performed; the performance and scalability level aimed at as a goal; the software model and techniques adopted to achieve the desired performance and scalability level; and the target hardware and category of usage, e.g. embedded system, low-medium traffic web server, or high-traffic Internet web server. Common features Although web server programs differ in how they are implemented, most of them offer the following common features. These are basic features that most web servers usually have. Static content serving: the ability to serve static content (web files) to clients via the HTTP protocol. HTTP: support for one or more versions of the HTTP protocol in order to send versions of HTTP responses compatible with the versions of client HTTP requests, e.g. HTTP/1.0, HTTP/1.1 (eventually also with encrypted connections, HTTPS), plus, if available, HTTP/2 and HTTP/3.
Logging: web servers usually also have the capability of logging some information about client requests and server responses to log files, for security and statistical purposes. A few other more advanced and popular features (only a very short selection) are the following. Dynamic content serving: the ability to serve dynamic content (generated on the fly) to clients via the HTTP protocol. Virtual hosting: the ability to serve many websites (domain names) using only one IP address. Authorization: the ability to allow, forbid or authorize access to portions of website paths (web resources). Content cache: the ability to cache static and/or dynamic content in order to speed up server responses. Large file support: the ability to serve files larger than 2 GB on a 32-bit OS. Bandwidth throttling: limiting the speed of content responses in order not to saturate the network and to be able to serve more clients. Rewrite engine: mapping parts of clean URLs (found in client requests) to their real names. Custom error pages: support for customized HTTP error messages. Common tasks A web server program, when it is running, usually performs several general tasks, e.g.: starts, optionally reads and applies settings found in its configuration file(s) or elsewhere, optionally opens the log file, and starts listening for client connections/requests; optionally tries to adapt its general behavior according to its settings and its current operating conditions; manages client connections (accepting new ones or closing existing ones as required); receives client requests (by reading HTTP messages): reads and verifies each HTTP request message; usually performs URL normalization; usually performs URL mapping (which may default to URL path translation); usually performs URL path translation along with various security checks; executes or refuses the requested HTTP method: optionally manages URL authorizations; optionally manages URL redirections; optionally manages requests for static resources (file contents): optionally manages directory index files; optionally manages regular files; optionally manages requests for dynamic resources: optionally manages directory listings; optionally manages program or module processing, checking the availability, the start and eventually the stop of the execution of external programs used to generate dynamic content; optionally manages the communications with external programs / internal modules used to generate dynamic content; replies to client requests, sending proper HTTP responses (e.g. requested resources or error messages), eventually verifying or adding HTTP headers to those sent by dynamic programs/modules; optionally logs (partially or totally) client requests and/or its responses to an external user log file or to a system log file by syslog, usually using common log format; optionally logs process messages about detected anomalies or other notable events (e.g. in client requests or in its internal functioning) using syslog or some other system facility (these log messages usually have a debug, warning, error or alert level, which can be filtered, i.e. not logged, depending on some settings; see also severity level); optionally generates statistics about the web traffic managed and/or its performance; and other custom tasks. Read request message Web server programs are able to: read an HTTP request message; interpret it; verify its syntax; and identify known HTTP headers and extract their values from them.
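As an illustration of this first step, the following minimal Python sketch parses a raw HTTP/1.1 request message into its request line and headers. It is a simplified example for well-formed input only; production servers perform far stricter validation:

raw = (
    "GET /path/file.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

def parse_request(message):
    # Split the head of the message (request line + headers) from the body.
    head, _, body = message.partition("\r\n\r\n")
    lines = head.split("\r\n")
    method, target, version = lines[0].split(" ", 2)   # the request line
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()  # header names are case-insensitive
    return method, target, version, headers, body

print(parse_request(raw))
# ('GET', '/path/file.html', 'HTTP/1.1', {'host': 'www.example.com', 'connection': 'keep-alive'}, '')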
Once an HTTP request message has been decoded and verified, its values can be used to determine whether the request can be satisfied. This requires many other steps, including security checks. URL normalization Web server programs usually perform some type of URL normalization (of the URL found in most HTTP request messages) in order to: make the resource path always a clean, uniform path from the root directory of the website; lower security risks (e.g. by more easily intercepting attempts to access static resources outside the root directory of the website, or to access portions of the path below the website root directory that are forbidden or require authorization); and make the path of web resources more recognizable to human beings and to web log analysis programs (also known as log analyzers / statistical applications). The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and the addition of a trailing slash to a non-empty path component. URL mapping "URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet." In practice, web server programs that implement advanced features beyond simple static content serving (e.g. a URL rewrite engine, dynamic content serving) usually have to figure out how a given URL has to be handled, e.g. as: a URL redirection, a redirection to another URL; a static request for file content; or a dynamic request, for a directory listing of files or other sub-directories contained in a directory, or for another type of dynamic resource, in which case the server must identify the program / module processor able to handle that kind of URL path and pass to it the other URL parts, i.e. usually path-info and query string variables. One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g. initial parts of the file path, the filename extension and other path components) to a specific URL handler (file, directory, external program or internal module). When a web server implements one or more of the above-mentioned advanced features, the path part of a valid URL may not always match an existing file system path under the website's directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests. URL path translation to file system Web server programs are able to translate a URL path (all or part of it) that refers to a physical file system path into an absolute path under the target website's root directory. The website's root directory may be specified by a configuration file or by some internal rule of the web server, using the name of the website, which is the host part of the URL found in the HTTP client request.
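A minimal Python sketch of these two steps, normalizing the URL path and translating it to a file system path under a website root, is shown below. The root directory /home/www/www.example.com matches the examples that follow; real servers add many more checks:

import posixpath

def translate_path(url_path, root="/home/www/www.example.com"):
    # Normalize: collapse "//" and resolve "." and ".." segments.
    clean = posixpath.normpath("/" + url_path.split("?", 1)[0])
    # Map onto the website root; as defense in depth, reject any result
    # that would fall outside the root directory tree.
    full = posixpath.join(root, clean.lstrip("/"))
    if full != root and not full.startswith(root + "/"):
        raise PermissionError("path escapes the website root")
    return full

print(translate_path("/path/file.html"))
# /home/www/www.example.com/path/file.html
print(translate_path("/a/../path/./file.html"))
# /home/www/www.example.com/path/file.html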
Path translation to the file system is done for the following types of web resources:

a local, usually non-executable, file (static request for file content);
a local directory (dynamic request: directory listing generated on the fly);
a program name (dynamic request that is executed using a CGI or SCGI interface and whose output is read by the web server and resent to the client that made the HTTP request).

The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (host) website's root directory; a sketch of this translation follows the examples below. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is /var/www/website). See the following examples of how it may result.

URL path translation for a static file request

Example of a static request of an existing file specified by the following URL:

http://www.example.com/path/file.html

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /path/file.html HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local file system resource:

/home/www/www.example.com/path/file.html

The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or that access to it is forbidden.

URL path translation for a directory request (without a static index file)

Example of an implicit dynamic request of an existing directory specified by the following URL:

http://www.example.com/directory1/directory2/

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /directory1/directory2 HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local directory path:

/home/www/www.example.com/directory1/directory2/

The web server then verifies the existence of the directory and, if it exists and can be accessed, tries to find an index file (which in this case does not exist); it therefore passes the request to an internal module or a program dedicated to directory listings, reads the resulting data output, and finally sends a response to the client's web browser. The response will describe the content of the directory (a list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or that access to it is forbidden.

URL path translation for a dynamic program request

For a dynamic request, the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content.

Example of a dynamic request using a program file to generate output:

http://www.example.com/cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

GET /cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15 HTTP/1.1
Host: www.example.com
Connection: keep-alive

The result is the local file path of the program (in this example, a PHP program):

/home/www/www.example.com/cgi-bin/forum.php

The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the information it needs to run. (In this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021.)
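The translation just illustrated can be sketched in a few lines of Python. The DOCUMENT_ROOTS table and its contents are hypothetical configuration, shown only to mirror the /home/www examples above.

# Minimal sketch of URL path translation to the file system.
import os

DOCUMENT_ROOTS = {"www.example.com": "/home/www/www.example.com"}  # hypothetical config

def translate_path(host: str, url_path: str) -> str:
    root = DOCUMENT_ROOTS.get(host)
    if root is None:
        raise LookupError("404 Not Found: unknown virtual host")
    # url_path is assumed already normalized (leading "/", no ".." segments),
    # so joining it to the root cannot escape the website directory tree.
    return os.path.join(root, url_path.lstrip("/"))

print(translate_path("www.example.com", "/path/file.html"))
# prints: /home/www/www.example.com/path/file.html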
In addition, the web server reads the data sent by the external program and resends that data to the client that made the request.

Manage request message

Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers. In practice, the web server has to handle the request by using one of these response paths:

if something in the request was not acceptable (in the status line or message headers), the web server has already sent an error response;
if the request has a method (e.g. OPTIONS) that can be satisfied by the general code of the web server, then a successful response is sent;
if the URL requires authorization, then an authorization error message is sent;
if the URL maps to a redirection, then a redirect message is sent;
if the URL maps to a dynamic resource (a virtual path or a directory listing), then its handler (an internal module or an external program) is called and the request parameters (query string and path info) are passed to it in order to allow it to reply to that request;
if the URL maps to a static resource (usually a file on the file system), then the internal static handler is called to send that file;
if the request method is not known, or if there is some other unacceptable condition (e.g. resource not found, internal server error, etc.), then an error response is sent.

Serve static content

If a web server program is capable of serving static content and has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation, and URL redirection) that of an existing file under the root directory of a website, and that file has attributes which match those required by the internal rules of the web server program.

That kind of content is called static because usually it is not changed by the web server when it is sent to clients, and because it remains the same until it is modified (file modification) by some program.

NOTE: when serving static content only, a web server program usually does not change the file contents of the served websites (as they are only read and never written), so it suffices to support only these HTTP methods:

OPTIONS
HEAD
GET

Responses with static file content can be sped up by a file cache.

Directory index files

If a web server program receives a client request message with a URL whose path matches that of an existing directory, that directory is accessible, and serving directory index file(s) is enabled, then the web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found, or other conditions are not met, then an error message is returned. The most commonly used names for static index files are index.html, index.htm, and Default.htm.

Regular files

If a web server program receives a client request message with a URL whose path matches the file name of an existing file, that file is accessible by the web server program, and its attributes match the internal rules of the web server program, then the web server program can send that file to the client.

Usually, for security reasons, most web server programs are pre-configured to serve only regular files and to avoid using special file types like device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources.
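The static-content handling described above can be sketched minimally as follows, restricted to the three methods listed (OPTIONS, HEAD, GET). This is illustrative only, and it assumes the translated file system path has already passed the security checks.

# Minimal sketch of a static-content handler.
import mimetypes, os

def serve_static(method: str, fs_path: str):
    if method not in ("OPTIONS", "HEAD", "GET"):
        return 405, {"Allow": "OPTIONS, HEAD, GET"}, b""
    if method == "OPTIONS":
        return 204, {"Allow": "OPTIONS, HEAD, GET"}, b""
    if not os.path.isfile(fs_path):
        return 404, {}, b"Not Found"        # only regular files are served
    ctype = mimetypes.guess_type(fs_path)[0] or "application/octet-stream"
    headers = {"Content-Type": ctype, "Content-Length": str(os.path.getsize(fs_path))}
    if method == "HEAD":
        return 200, headers, b""            # HEAD: same headers as GET, no body
    with open(fs_path, "rb") as f:
        return 200, headers, f.read()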
Serve dynamic content

If a web server program is capable of serving dynamic content and has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request. The web server program then reads the data response generated (often on the fly) by that module or program and resends it to the client program that made the request.

NOTE: when serving static and dynamic content, a web server program usually also has to support the following HTTP method, in order to be able to safely receive data from client(s) and thus host websites with interactive form(s) that may send large data sets (e.g. lots of data entry or file uploads) to the web server / external programs / modules:

POST

In order to be able to communicate with its internal modules and/or external programs, a web server program must implement one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content). The three standard and historical gateway interfaces are the following (a sketch of the first follows below).

CGI: an external CGI program is run by the web server program for each dynamic request; the web server program then reads the generated data response from it and resends that response to the client.

SCGI: an external SCGI program (usually a process) is started once by the web server program or by some other program / process and then waits for network connections; every time there is a new request for it, the web server program makes a new network connection to it in order to send the request parameters and to read its data response, and then the network connection is closed.

FastCGI: an external FastCGI program (usually a process) is started once by the web server program or by some other program / process and then waits for a network connection which is established permanently by the web server; through that connection the request parameters are sent and the data responses are read.

Directory listings

A web server program may be capable of managing the dynamic generation (on the fly) of a directory index listing files and sub-directories. If the web server program is configured to do so, a requested URL path matches an existing directory, access to it is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files and/or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error message is returned.

Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g. $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server), e.g. index.tpl, or the usage of HTML documents with embedded source code that is interpreted and executed on the fly, e.g. index.asp, and/or by supporting the usage of dynamic index programs such as CGIs, SCGIs, and FCGIs, e.g. index.cgi, index.php, index.fcgi.

Usage of dynamically generated directory listings is usually avoided, or limited to a few selected directories of a website, because that generation consumes many more OS resources than sending a static index page.
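As promised above, here is a minimal sketch of the CGI gateway. The environment variables are a subset of the CGI meta-variables defined by RFC 3875; the 30-second timeout is an illustrative assumption, and a real server sets many more variables and parses the CGI headers out of the program's output.

# Minimal sketch of running an external CGI program once per request.
import subprocess

def run_cgi(script_fs_path: str, method: str, query: str, path_info: str) -> bytes:
    env = {
        "GATEWAY_INTERFACE": "CGI/1.1",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "REQUEST_METHOD": method,
        "QUERY_STRING": query,        # e.g. "action=view&orderby=thread"
        "PATH_INFO": path_info,
    }
    done = subprocess.run([script_fs_path], env=env, capture_output=True, timeout=30)
    # The output begins with CGI headers (e.g. "Content-Type: ..."), then a
    # blank line, then the body; the server relays/completes those headers.
    return done.stdout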
The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification dates-times, or file attributes may change randomly / frequently) as they are, without requiring the requesting user to provide further information.

Program or module processing

An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from, or to store data to, one or more data repositories, e.g.:

files (file system);
databases (DBs);
other sources located on the local computer or on other computers.

A processing unit can return any kind of web content, possibly using data retrieved from a data repository, e.g.:

a document (e.g. HTML, XML, etc.);
an image;
a video;
structured data, e.g. data that may be used to update one or more values displayed by a dynamic page (DHTML) of a web interface and that perhaps was requested by an XMLHttpRequest API (see also: dynamic page).

In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, it is usually generated dynamically.

Send response message

Web server programs are able to send response messages as replies to client request messages. An error response message may be sent because a request message could not be successfully read, decoded, analyzed, or executed.

NOTE: the following sections are reported only as examples to help in understanding what a web server, more or less, does; these sections are by no means exhaustive or complete.

Error message

A web server program may reply to a client request message with many kinds of error messages; these errors fall mainly into two categories:

HTTP client errors, due to the type of request message or to the availability of the requested web resource;
HTTP server errors, due to internal server errors.

When an error response / message is received by a client browser, if it is related to the main user request (e.g. the URL of a web resource such as a web page), then usually that error message is shown in some browser window / message.

URL authorization

A web server program may be able to verify whether the requested URL path:

can be freely accessed by everybody;
requires user authentication (a request for user credentials, e.g. a user name and password);
is forbidden to some or all kinds of users.

If the authorization / access rights feature has been implemented and enabled, and access to the web resource is not granted, then, depending on the required access rights, the web server program:

can deny access by sending a specific error message (e.g. access forbidden);
may deny access by sending a specific error message (e.g. access unauthorized) that usually forces the client browser to ask the human user to provide the required user credentials; if authentication credentials are provided, then the web server program verifies and accepts or rejects them.

URL redirection

A web server program may have the capability of performing URL redirections to new URLs (new locations), which consists of replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should redo the request with the new URL). URL redirection of location is used:

to fix a directory name by adding a final slash '/';
to give a new URL for a URL path that no longer exists, pointing to a new path where that kind of web resource can be found;
to give a new URL on another domain when the current domain has too much load.

Example 1: a URL path points to a directory name but it does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name.

From: /directory1/directory2
To:   /directory1/directory2/

Example 2: a whole set of documents has been moved inside the website in order to reorganize their file system paths.

From: /directory1/directory2/2021-10-08/
To:   /directory1/directory2/2021/10/08/

Example 3: a whole set of documents has been moved to a new website and now it is mandatory to use secure HTTPS connections to access them.

From: http://www.example.com/directory1/directory2/2021-10-08/
To:   https://docs.example.com/directory1/2021-10-08/

The above examples are only a few of the possible kinds of redirection.

Successful message

A web server program is able to reply to a valid client request message with a successful message, optionally containing the requested web resource data. If web resource data is sent back to the client, then it can be static content or dynamic content depending on how it has been retrieved (from a file or from the output of some program / module).

Content cache

In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category. Content is usually cached by its origin, e.g.:

static content: file cache;
dynamic content: dynamic cache (module / program output).

File cache

Historically, static content found in files which had to be accessed frequently, randomly, and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s / 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations when compared to RAM speed, and so, since the early OSs, first disk caches and then also OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data / files.

Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck in the performance growth expected from top-level web servers, especially since the mid-late 1990s, when Internet web traffic started to grow exponentially along with the constant increase in the speed of Internet / network lines.

The problem of how to further speed up the serving of static files efficiently, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs.

In practice, nowadays, many popular / high-performance web server programs include their own userland file cache, tailored for web server usage and using their own specific implementation and parameters. The widespread adoption of RAID and/or fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.

Dynamic cache

Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters), and so, perhaps for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fast disk.
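Both cache types just described can be illustrated together in a short sketch. The size limit and time-to-live below are illustrative assumptions, not values from any particular server.

# Minimal sketch: a file cache keyed by modification time, and a
# dynamic cache keyed by the full URL with a fixed time-to-live.
import os, time

_file_cache = {}                 # fs_path -> (mtime, bytes)
_dyn_cache = {}                  # url_key -> (timestamp, bytes)
MAX_CACHEABLE_SIZE = 256 * 1024  # assumed limit: only cache small files
TTL_SECONDS = 60.0               # assumed freshness window for dynamic output

def read_file_cached(fs_path):
    mtime = os.path.getmtime(fs_path)
    entry = _file_cache.get(fs_path)
    if entry is not None and entry[0] == mtime:
        return entry[1]          # cache hit: no disk read needed
    with open(fs_path, "rb") as f:
        data = f.read()
    if len(data) <= MAX_CACHEABLE_SIZE:
        _file_cache[fs_path] = (mtime, data)
    return data

def dynamic_cached(url_key, generate):
    now = time.monotonic()
    entry = _dyn_cache.get(url_key)
    if entry is not None and now - entry[0] < TTL_SECONDS:
        return entry[1]          # still fresh: skip calling the module/program
    body = generate()            # call the internal module / external program
    _dyn_cache[url_key] = (now, body)
    return body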
The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g. every n minutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches.

In most cases, however, those kinds of caches are implemented by external servers (e.g. a reverse proxy) or by storing the dynamic data output on separate computers managed by specific applications (e.g. memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web server(s).

Kernel-mode and user-mode web servers

Web server software can either be incorporated into the OS and executed in kernel space, or be executed in user space (like other regular applications). Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode; however, there are disadvantages in running a web server in kernel mode, e.g. difficulties in developing (debugging) the software, and the risk that critical run-time errors may lead to serious problems in the OS kernel.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied, because the system reserves resources for its own usage and has the responsibility of sharing hardware resources with all the other running applications. Executing in user mode can also mean using more buffer/data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server.

Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls, and new optimized web server software).
Technology
Networks
null
33456
https://en.wikipedia.org/wiki/Well-order
Well-order
In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total ordering on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the ordering is then called a well-ordered set or woset. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering.

Every non-empty well-ordered set has a least element. Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s. There may be elements, besides the least element, that have no predecessor (see below for an example). A well-ordered set S contains for every subset T with an upper bound a least upper bound, namely the least element of the subset of all upper bounds of T in S.

If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order. The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible.

Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set.

The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers).

Ordinal numbers

Every well-ordered set is uniquely order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The position of each element within the ordered set is also given by an ordinal number. In the case of a finite set, the basic operation of counting, to find the ordinal number of a particular object, or to find the object with a particular ordinal number, corresponds to assigning ordinal numbers one by one to the objects. The size (number of elements, cardinal number) of a finite set is equal to the order type. Counting in the everyday sense typically starts from one, so it assigns to each object the size of the initial segment with that object as last element. Note that these numbers are one more than the formal ordinal numbers according to the isomorphic order, because these are equal to the number of earlier objects (which corresponds to counting from zero). Thus for finite n, the expression "n-th element" of a well-ordered set requires context to know whether this counts from zero or one. In an expression "β-th element" where β can also be an infinite ordinal, it will typically count from zero.

For an infinite set, the order type determines the cardinality, but not conversely: sets of a particular infinite cardinality can have many different order types (see the natural numbers, below, for an example). For a countably infinite set, the set of possible order types is uncountable.

Examples and counterexamples

Natural numbers

The standard ordering ≤ of the natural numbers is a well ordering and has the additional property that every non-zero natural number has a unique predecessor.
Another well ordering of the natural numbers is given by defining that all even numbers are less than all odd numbers, and the usual ordering applies within the evens and the odds:

0, 2, 4, 6, 8, ..., 1, 3, 5, 7, 9, ...

This is a well-ordered set of order type ω + ω. Every element has a successor (there is no largest element). Two elements lack a predecessor: 0 and 1.

Integers

Unlike the standard ordering ≤ of the natural numbers, the standard ordering ≤ of the integers is not a well ordering, since, for example, the set of negative integers does not contain a least element.

The following binary relation R is an example of a well ordering of the integers: x R y if and only if one of the following conditions holds: x = 0; x is positive, and y is negative; x and y are both positive, and x ≤ y; x and y are both negative, and |x| ≤ |y|. This relation can be visualized as follows:

0, 1, 2, 3, 4, ..., −1, −2, −3, ...

R is isomorphic to the ordinal number ω + ω.

Another relation for well ordering the integers is the following definition: x ≤z y if and only if |x| < |y|, or |x| = |y| and x ≤ y. This well order can be visualized as follows:

0, −1, 1, −2, 2, −3, 3, −4, 4, ...

This has order type ω. (A short computational illustration of this second ordering appears after the equivalent formulations below.)

Reals

The standard ordering ≤ of any real interval is not a well ordering, since, for example, the open interval (0, 1) does not contain a least element. From the ZFC axioms of set theory (including the axiom of choice) one can show that there is a well order of the reals. Also, Wacław Sierpiński proved that ZF + GCH (the generalized continuum hypothesis) implies the axiom of choice and hence a well order of the reals. Nonetheless, it is possible to show that the ZFC + GCH axioms alone are not sufficient to prove the existence of a definable (by a formula) well order of the reals. However, it is consistent with ZFC that a definable well ordering of the reals exists; for example, it is consistent with ZFC that V = L, and it follows from ZFC + V = L that a particular formula well orders the reals, or indeed any set.

An uncountable subset of the real numbers with the standard ordering ≤ cannot be a well order: suppose X is a subset of ℝ well ordered by ≤. For each x in X, let s(x) be the successor of x in the ≤ ordering on X (unless x is the last element of X). Let A = { (x, s(x)) | x ∈ X }, whose elements are nonempty and disjoint intervals. Each such interval contains at least one rational number, so there is an injective function from A to ℚ. There is an injection from X to A (except possibly for a last element of X, which could be mapped to zero later). And it is well known that there is an injection from ℚ to the natural numbers (which could be chosen to avoid hitting zero). Thus there is an injection from X to the natural numbers, which means that X is countable. On the other hand, a countably infinite subset of the reals may or may not be a well order with the standard ≤. For example:

The natural numbers are a well order under the standard ordering ≤.
The set {1/n : n = 1, 2, 3, ...} has no least element and is therefore not a well order under the standard ordering ≤.

Examples of well orders:

The set of numbers {−2^−n | 0 ≤ n < ω} has order type ω.
The set of numbers {−2^−n − 2^−m−n | 0 ≤ m, n < ω} has order type ω². The previous set is the set of limit points within the set. Within the set of real numbers, either with the ordinary topology or the order topology, 0 is also a limit point of the set. It is also a limit point of the set of limit points.
The set of numbers {−2^−n | 0 ≤ n < ω} ∪ {1} has order type ω + 1. With the order topology of this set, 1 is a limit point of the set, despite being separated from the only limit point 0 under the ordinary topology of the real numbers.

Equivalent formulations

If a set is totally ordered, then the following are equivalent to each other:

The set is well ordered. That is, every nonempty subset has a least element.
Transfinite induction works for the entire ordered set.
Every strictly decreasing sequence of elements of the set must terminate after only finitely many steps (assuming the axiom of dependent choice).
Every subordering is isomorphic to an initial segment.

Order topology

Every well-ordered set can be made into a topological space by endowing it with the order topology. With respect to this topology there can be two kinds of elements:

isolated points: these are the minimum and the elements with a predecessor;
limit points: this type does not occur in finite sets, and may or may not occur in an infinite set; the infinite sets without limit points are the sets of order type ω, for example the natural numbers ℕ.

For subsets we can distinguish:

Subsets with a maximum (that is, subsets that are bounded by themselves); this can be an isolated point or a limit point of the whole set; in the latter case it may or may not be also a limit point of the subset.
Subsets that are unbounded by themselves but bounded in the whole set; they have no maximum, but a supremum outside the subset; if the subset is non-empty this supremum is a limit point of the subset and hence also of the whole set; if the subset is empty this supremum is the minimum of the whole set.
Subsets that are unbounded in the whole set.

A subset is cofinal in the whole set if and only if it is unbounded in the whole set or it has a maximum that is also a maximum of the whole set.

A well-ordered set as a topological space is a first-countable space if and only if it has order type less than or equal to ω1 (omega-one), that is, if and only if the set is countable or has the smallest uncountable order type.
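As noted in the examples above, the second well ordering of the integers can be illustrated computationally (this snippet is an illustration, not part of the article's text): sorting by the key (|x|, x) enumerates the integers in order type ω.

# Illustration only: the key (|x|, x) realizes the well ordering
# "x comes before y iff |x| < |y|, or |x| = |y| and x <= y".
def z_key(x: int):
    return (abs(x), x)

print(sorted(range(-4, 5), key=z_key))  # [0, -1, 1, -2, 2, -3, 3, -4, 4]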
Mathematics
Order theory
null
33458
https://en.wikipedia.org/wiki/Well-ordering%20theorem
Well-ordering theorem
In mathematics, the well-ordering theorem, also known as Zermelo's theorem, states that every set can be well-ordered. A set X is well-ordered by a strict total order if every non-empty subset of X has a least element under the ordering. The well-ordering theorem together with Zorn's lemma are the most important mathematical statements that are equivalent to the axiom of choice (often called AC). Ernst Zermelo introduced the axiom of choice as an "unobjectionable logical principle" to prove the well-ordering theorem. One can conclude from the well-ordering theorem that every set is susceptible to transfinite induction, which is considered by mathematicians to be a powerful technique. One famous consequence of the theorem is the Banach–Tarski paradox.

History

Georg Cantor considered the well-ordering theorem to be a "fundamental principle of thought". However, it is considered difficult or even impossible to visualize a well-ordering of ℝ, the set of real numbers; such a visualization would have to incorporate the axiom of choice. In 1904, Gyula Kőnig claimed to have proven that such a well-ordering cannot exist. A few weeks later, Felix Hausdorff found a mistake in the proof. It turned out, though, that in first-order logic the well-ordering theorem is equivalent to the axiom of choice, in the sense that the Zermelo–Fraenkel axioms with the axiom of choice included are sufficient to prove the well-ordering theorem, and conversely, the Zermelo–Fraenkel axioms without the axiom of choice but with the well-ordering theorem included are sufficient to prove the axiom of choice. (The same applies to Zorn's lemma.) In second-order logic, however, the well-ordering theorem is strictly stronger than the axiom of choice: from the well-ordering theorem one may deduce the axiom of choice, but from the axiom of choice one cannot deduce the well-ordering theorem.

There is a well-known joke about the three statements, and their relative amenability to intuition:

The axiom of choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?

Proof from axiom of choice

The well-ordering theorem follows from the axiom of choice as follows. Let the set we are trying to well-order be X, and let f be a choice function for the family of non-empty subsets of X. For every ordinal α, define an element a_α that is in X by setting a_α = f(X ∖ {a_ξ | ξ < α}) if this complement is nonempty, or leave a_α undefined if it is. That is, a_α is chosen from the set of elements of X that have not yet been assigned a place in the ordering (or undefined if the entirety of X has been successfully enumerated). Then the order on X defined by a_α ≤ a_β if and only if α ≤ β (in the usual well-order of the ordinals) is a well-order of X as desired, with order type given by the least ordinal for which a_α is undefined (the recursion is displayed below).

Proof of axiom of choice

The axiom of choice can be proven from the well-ordering theorem as follows. To make a choice function for a collection of non-empty sets E, take the union of the sets in E and call it X. There exists a well-ordering of X; let R be such an ordering. The function that to each set S of E associates the smallest element of S, as ordered by (the restriction to S of) R, is a choice function for the collection E.

An essential point of this proof is that it involves only a single arbitrary choice, that of R; applying the well-ordering theorem to each member S of E separately would not work, since the theorem only asserts the existence of a well-ordering, and choosing for each S a well-ordering would require just as many choices as simply choosing an element from each S.
Particularly, if E contains uncountably many sets, making all uncountably many choices is not allowed under the axioms of Zermelo–Fraenkel set theory without the axiom of choice.
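For reference, the transfinite recursion used in the proof from the axiom of choice above can be written compactly as follows (a standard presentation sketched in LaTeX, not a quotation from the article):

% f is a choice function on the non-empty subsets of X.
\[
  a_\alpha = f\bigl( X \setminus \{\, a_\xi \mid \xi < \alpha \,\} \bigr)
  \quad \text{while } X \setminus \{\, a_\xi \mid \xi < \alpha \,\} \neq \varnothing .
\]
% The recursion halts at some least ordinal \lambda, since otherwise it
% would inject the proper class of ordinals into the set X; then
\[
  a_\alpha \le a_\beta \iff \alpha \le \beta
\]
% well-orders X with order type \lambda.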
Mathematics
Axiomatic systems
null
33496
https://en.wikipedia.org/wiki/Weapon
Weapon
A weapon, arm, or armament is any implement or device that is used to deter, threaten, inflict physical damage, harm, or kill. Weapons are used to increase the efficacy and efficiency of activities such as hunting, crime (e.g., murder), law enforcement, self-defense, warfare, or suicide. In a broader context, weapons may be construed to include anything used to gain a tactical, strategic, material, or mental advantage over an adversary or enemy target.

While ordinary objects such as rocks and bottles can be used as weapons, many objects are expressly designed for the purpose; these range from simple implements such as clubs and swords to complicated modern firearms, tanks, missiles, and biological weapons. Something that has been repurposed, converted, or enhanced to become a weapon of war is termed weaponized, such as a weaponized virus or weaponized laser.

History

The use of weapons has been a major driver of cultural evolution and human history up to today, since weapons are a type of tool used to dominate and subdue autonomous agents such as animals and, by doing so, allow for an expansion of the cultural niche. At the same time, other weapon users (i.e., agents such as humans, groups, and cultures) are able to adapt to the weapons of enemies by learning, triggering a continuous process of competitive technological, skill, and cognitive improvement (an arms race).

Prehistoric

The use of objects as weapons has been observed among chimpanzees, leading to speculation that early hominids used weapons as early as five million years ago. However, this cannot be confirmed using physical evidence, because wooden clubs, spears, and unshaped stones would have left an ambiguous record. The earliest unambiguous weapons to be found are the Schöningen spears, eight wooden throwing spears dating back more than 300,000 years. At the site of Nataruk in Turkana, Kenya, numerous human skeletons dating to 10,000 years ago may present evidence of traumatic injuries to the head, neck, ribs, knees, and hands, including obsidian projectiles embedded in the bones, that might have been caused by arrows and clubs during conflict between two hunter-gatherer groups. But the interpretation of warfare at Nataruk has been challenged due to conflicting evidence.

Ancient history

The earliest ancient weapons were evolutionary improvements of late Neolithic implements, but significant improvements in materials and crafting techniques led to a series of revolutions in military technology. The development of metal tools began with copper during the Copper Age (about 3,300 BC) and was followed by the Bronze Age, leading to the creation of the Bronze Age sword and similar weapons.

During the Bronze Age, the first defensive structures and fortifications appeared as well, indicating an increased need for security. Weapons designed to breach fortifications followed soon after, such as the battering ram, which was in use by 2500 BC.

The development of ironworking around 1300 BC in Greece had an important impact on the development of ancient weapons. It was not the introduction of early Iron Age swords, however, as they were not superior to their bronze predecessors, but rather the domestication of the horse and the widespread use of spoked wheels by about 2000 BC. This led to the creation of the light, horse-drawn chariot, whose improved mobility proved important during this era. Spoke-wheeled chariot usage peaked around 1300 BC and then declined, ceasing to be militarily relevant by the 4th century BC.
Cavalry developed once horses were bred to support the weight of a human. The horse extended the range and increased the speed of attacks. Alexander's conquests saw the increased use of spears and shields in the Middle East and Western Asia; as Greek culture spread, many Greek and other European weapons came to be used in these regions and were adapted to fit their new use in war. In addition to land-based weaponry, warships, such as the trireme, were in use by the 7th century BC. During the First Punic War, the use of advanced warships contributed to a Roman victory over the Carthaginians.

Post-classical history

European warfare during post-classical history was dominated by elite groups of knights supported by massed infantry. They were involved in mobile combat and sieges, which involved various siege weapons and tactics. Knights on horseback developed tactics for charging with lances, providing an impact on the enemy formations, and then drawing more practical weapons (such as swords) once they entered the melee. By contrast, infantry, in the age before structured formations, relied on cheap, sturdy weapons such as spears and billhooks in close combat and bows from a distance. As armies became more professional, their equipment was standardized, and infantry transitioned to pikes. Pikes are normally seven to eight feet in length and used in conjunction with smaller sidearms (short swords).

In Eastern and Middle Eastern warfare, similar tactics were developed independently of European influences. The introduction of gunpowder from Asia at the end of this period revolutionized warfare. Formations of musketeers, protected by pikemen, came to dominate open battles, and the cannon replaced the trebuchet as the dominant siege weapon. The Ottomans used cannon to destroy much of the fortifications of Constantinople, which would change warfare as gunpowder became more available and technology improved.

Modern history

Early modern

The European Renaissance marked the beginning of the implementation of firearms in western warfare. Guns and rockets were introduced to the battlefield. Firearms are qualitatively different from earlier weapons because they release energy from combustible propellants, such as gunpowder, rather than from a counterweight or spring. This energy is released very rapidly and can be replicated without much effort by the user. Therefore, even early firearms such as the arquebus were much more powerful than human-powered weapons. Firearms became increasingly important and effective during the 16th–19th centuries, with progressive improvements in ignition mechanisms followed by revolutionary changes in ammunition handling and propellant. During the American Civil War, new applications of firearms, including the machine gun and the ironclad warship, emerged that would still be recognizable and useful military weapons today, particularly in limited conflicts. In the 19th century, warship propulsion changed from sail power to fossil fuel-powered steam engines.

From the mid-18th-century North American French and Indian War through the beginning of the 20th century, human-powered weapons were reduced from the primary weaponry of the battlefield, yielding to gunpowder-based weaponry. Sometimes referred to as the "Age of Rifles", this period was characterized by the development of firearms for infantry and cannons for support, as well as the beginnings of mechanized weapons such as the machine gun.
Artillery pieces such as howitzers were able to destroy masonry fortresses and other fortifications, and this single invention caused a revolution in military affairs, establishing tactics and doctrine that are still in use today.

World War I

An important feature of industrial age warfare was technological escalation: innovations were rapidly matched through replication or countered by another innovation. World War I marked the entry of fully industrialized warfare as well as weapons of mass destruction (e.g., chemical and biological weapons), and new weapons were developed quickly to meet wartime needs. The technological escalation during World War I was profound, including the wide introduction of aircraft into warfare and of naval warfare with aircraft carriers. Above all, it promised the military commanders independence from horses and a resurgence in maneuver warfare through the extensive use of motor vehicles. The changes that these military technologies underwent were evolutionary but defined their development for the rest of the century.

Interwar

This period of innovation in weapon design continued in the interwar period (between WWI and WWII) with the continuous evolution of weapon systems by all major industrial powers. The major armament firms were Schneider-Creusot (based in France), Škoda Works (Czechoslovakia), and Vickers (Great Britain). The 1920s were committed to disarmament and the outlawing of war and poison gas, but rearmament picked up rapidly in the 1930s. The munitions makers responded nimbly to the rapidly shifting strategic and economic landscape. The main purchasers of munitions from the big three companies were Romania, Yugoslavia, Greece, and Turkey and, to a lesser extent, Poland, Finland, the Baltic States, and the Soviet Union.

Criminalizing poison gas

Realistic critics understood that war could not really be outlawed, but its worst excesses might be banned. Poison gas became the focus of a worldwide crusade in the 1920s. Poison gas did not win battles, and the generals did not want it. The soldiers hated it far more intensely than bullets or explosive shells. By 1918, chemical shells made up 35 percent of French ammunition supplies, 25 percent of British, and 20 percent of American stock. The "Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous, or Other Gases and of Bacteriological Methods of Warfare", also known as the Geneva Protocol, was issued in 1925 and was accepted as policy by all major countries. In 1937, poison gas was manufactured in large quantities but not used, except against nations that lacked modern weapons or gas masks.

World War II and postwar

Many modern military weapons, particularly ground-based ones, are relatively minor improvements to weapon systems developed during World War II. World War II marked perhaps the most frantic period of weapon development in the history of humanity. Massive numbers of new designs and concepts were fielded, and all existing technologies were improved between 1939 and 1945. The most powerful weapon invented during this period was the nuclear bomb; however, many other weapons influenced the world, such as jet aircraft and radar, but were overshadowed by the visibility of nuclear weapons and long-range rockets.

Nuclear weapons

Since the realization of mutual assured destruction (MAD), the nuclear option of all-out war is no longer considered a survivable scenario.
During the Cold War in the years following World War II, both the United States and the Soviet Union engaged in a nuclear arms race. Each country and its allies continually attempted to out-develop the other in the field of nuclear armaments. Once their joint technological capabilities reached the point of being able to ensure the destruction of the Earth 100-fold, a new tactic had to be developed. With this realization, armaments development funding shifted back to primarily sponsoring the development of conventional arms technologies for support of limited wars rather than total war.

Types

By user – what person or unit uses the weapon:

Personal weapons (or small arms) – designed to be used by a single person.
Light weapons – 'man-portable' weapons that may require a small team to operate.
Heavy weapons – artillery and similar weapons larger than light weapons (see SALW).
Crew-served weapons – larger than personal weapons, requiring two or more people to operate correctly.
Fortification weapons – mounted in a permanent installation or used primarily within a fortification.
Mountain weapons – for use by mountain forces or those operating in difficult terrain.
Vehicle-mounted weapons – to be mounted on any type of combat vehicle.
Railway weapons – designed to be mounted on railway cars, including armored trains.
Aircraft weapons – carried on and used by some type of aircraft, helicopter, or other aerial vehicle.
Naval weapons – mounted on ships and submarines.
Space weapons – designed to be used in or launched from space.
Autonomous weapons – capable of accomplishing a mission with limited or no human intervention.

By function – the construction of the weapon and its principle of operation:

Antimatter weapons (theoretical) – would combine matter and antimatter to cause a powerful explosion.
Archery weapons – operate by using a tensioned string and a bent solid to launch a projectile.
Artillery – firearms capable of launching heavy projectiles over long distances.
Biological weapons – spread biological agents, causing disease or infection.
Blunt instruments – designed to break or fracture bones, produce concussions, create organ ruptures, or crush injuries.
Chemical weapons – poison people and cause reactions.
Edged and bladed weapons – designed to pierce or cut through skin, muscle, or bone and cause internal or external bleeding.
Energy weapons – rely on concentrating forms of energy to attack, such as lasers or sonic attacks.
Explosive weapons – use a physical explosion to create a blast, a concussion, or spread shrapnel.
Firearms – use a chemical charge to launch projectiles.
Improvised weapons – common objects reused as weapons, such as crowbars and kitchen knives.
Incendiary weapons – cause damage by fire.
Loitering munitions – designed to loiter over a battlefield, striking once a target is located.
Magnetic weapons – use magnetic fields to propel projectiles or focus particle beams.
Missiles – rockets that are guided to their target after launch. (Also a general term for projectile weapons.)
Non-lethal weapons – designed to subdue without killing.
Nuclear weapons – use radioactive materials to create nuclear fission or nuclear fusion detonations.
Rockets – self-propelled projectiles.
Suicide weapons – exploit the willingness of their operators to not survive the attack.

By target – the type of target the weapon is designed to attack:

Anti-aircraft weapons – target missiles and aerial vehicles in flight.
Anti-fortification weapons – designed to target enemy installations.
Anti-personnel weapons – designed to attack people, either individually or in numbers.
Anti-radiation weapons – target sources of electronic radiation, particularly radar emitters.
Anti-satellite weapons – target orbiting satellites.
Anti-ship weapons – target ships and vessels on water.
Anti-submarine weapons – target submarines and other underwater targets.
Anti-tank weapons – designed to defeat armored targets.
Area denial weapons – target territory, making it unsafe or unsuitable for enemy use or travel.
Hunting weapons – weapons used to hunt game animals.
Infantry support weapons – designed to attack various threats to infantry units.
Siege engines – designed to break or circumvent heavy fortifications in siege warfare.

Manufacture of weapons

The arms industry is a global industry that involves the sale and manufacture of weaponry. It consists of a commercial industry involved in the research and development, engineering, production, and servicing of military materiel, equipment, and facilities. Many industrialized countries have a domestic arms industry to supply their own military forces, and some also have a substantial trade in weapons for use by their citizens for self-defense, hunting, or sporting purposes.

Contracts to supply a given country's military are awarded by governments, making arms contracts of substantial political importance. The link between politics and the arms trade can result in the development of a "military–industrial complex", where the armed forces, commerce, and politics become closely linked. According to the research institute SIPRI, the volume of international transfers of major weapons in 2010–2014 was 16 percent higher than in 2005–2009, and the arms sales of the world's 100 largest private arms-producing and military services companies totaled $420 billion in 2018.

Legislation

The production, possession, trade, and use of many weapons are controlled. This may be at a local or central government level or by international treaty. Examples of such controls include:

The right of self-defense
Knife legislation
Air gun laws
Gun law
Arms trafficking laws
Arms control treaties
Space Preservation Treaty

Gun laws

All countries have laws and policies regulating aspects such as the manufacture, sale, transfer, possession, modification, and use of small arms by civilians. Countries that regulate access to firearms will typically restrict access to certain categories of firearms and then restrict the categories of persons who may be granted a license for access to such firearms. There may be separate licenses for hunting, sport shooting (a.k.a. target shooting), self-defense, collecting, and concealed carry, with different sets of requirements, permissions, and responsibilities.

Arms control laws

International treaties and agreements place restrictions on the development, production, stockpiling, proliferation, and usage of weapons, from small arms and heavy weapons to weapons of mass destruction. Arms control is typically exercised through the use of diplomacy, which seeks to impose such limitations upon consenting participants, although it may also comprise efforts by a nation or group of nations to enforce limitations upon a non-consenting country.

Arms trafficking laws

Arms trafficking is the trafficking of contraband weapons and ammunition. What constitutes legal trade in firearms varies widely, depending on local and national laws. In 2001, the United Nations adopted a protocol against the manufacturing of and trafficking in illicit arms.
This protocol requires governments to dispose of illegal arms and to license newly produced firearms so as to ensure their legitimacy. It was signed by 122 parties.

Lifecycle problems

There are a number of issues around the potential ongoing risks from deployed weapons, the safe storage of weapons, and their eventual disposal when they are no longer effective or safe. Ocean dumping of unused weapons such as bombs, ordnance, landmines, and chemical weapons has been common practice by many nations and has created hazards. Unexploded ordnance (UXO) comprises bombs, land mines, naval mines, and similar devices that did not explode when they were employed and still pose a risk for many years or decades. Demining or mine clearance from areas of past conflict is a difficult process, but every year landmines kill 15,000 to 20,000 people and severely maim countless more. Nuclear terrorism was a serious concern after the fall of the Soviet Union, with the prospect of "loose nukes" being available. While this risk may have receded, similar situations may arise in the future.

In science fiction

Strange and exotic weapons are a recurring feature or theme in science fiction. In some cases, weapons first introduced in science fiction have now become a reality. Other science fiction weapons, such as force fields and stasis fields, remain purely fictional and are often beyond the realms of known physical possibility.

At its most prosaic, science fiction features an endless variety of sidearms, mostly variations on real weapons such as guns and swords. Among the best known of these are the phaser, used in the Star Trek television series, films, and novels, and the lightsaber and blaster featured in the Star Wars movies, comics, novels, and TV series.

In addition to adding action and entertainment value, weaponry in science fiction sometimes becomes a theme when it touches on deeper concerns, often motivated by contemporary issues. One example is science fiction that deals with weapons of mass destruction, like doomsday devices.
Technology
Weapons
null
33498
https://en.wikipedia.org/wiki/Wire
Wire
A wire is a flexible, round bar of metal. Wires are commonly formed by drawing the metal through a hole in a die or draw plate. Wire gauges come in various standard sizes, as expressed in terms of a gauge number or cross-sectional area. Wires are used to bear mechanical loads, often in the form of wire rope. In electricity and telecommunications signals, a "wire" can refer to an electrical cable, which can contain a "solid core" of a single wire or separate strands in stranded or braided forms. Usually cylindrical in geometry, wire can also be made in square, hexagonal, flattened rectangular, or other cross-sections, either for decorative purposes, or for technical purposes such as high-efficiency voice coils in loudspeakers. Edge-wound coil springs, such as the Slinky toy, are made of special flattened wire.

History

In antiquity, jewelry often contains large amounts of wire in the form of chains and applied decoration that is accurately made and which must have been produced by some efficient, if not technically advanced, means. In some cases, strips cut from metal sheet were made into wire by pulling them through perforations in stone beads. This causes the strips to fold round on themselves to form thin tubes. This strip drawing technique was in use in Egypt by the 2nd Dynasty. From the middle of the 2nd millennium BCE most of the gold wires in jewelry are characterized by seam lines that follow a spiral path along the wire. Such twisted strips can be converted into solid round wires by rolling them between flat surfaces or by the strip wire drawing method. The strip twist wire manufacturing method was superseded by drawing in the ancient Old World sometime between about the 8th and 10th centuries AD. There is some evidence for the use of drawing further East prior to this period.

Square and hexagonal wires were possibly made using a swaging technique. In this method a metal rod was struck between grooved metal blocks, or between a grooved punch and a grooved metal anvil. Swaging is of great antiquity, possibly dating to the beginning of the 2nd millennium BCE in Egypt and to the Bronze and Iron Ages in Europe for torcs and fibulae. Twisted square-section wires are a very common filigree decoration in early Etruscan jewelry.

In about the middle of the 2nd millennium BCE, a new category of decorative tube was introduced which imitated a line of granules. True beaded wire, produced by mechanically distorting a round-section wire, appeared in the Eastern Mediterranean and Italy in the seventh century BCE, perhaps disseminated by the Phoenicians. Beaded wire continued to be used in jewellery into modern times, although it largely fell out of favour in about the tenth century CE, when two drawn round wires, twisted together to form what are termed 'ropes', provided a simpler-to-make alternative. A forerunner to beaded wire may be the notched strips and wires which first occur from around 2000 BCE in Anatolia.

Wire was drawn in England from the medieval period. The wire was used to make wool cards and pins, manufactured goods whose import was prohibited by Edward IV in 1463. The first wire mill in Great Britain was established at Tintern in about 1568 by the founders of the Company of Mineral and Battery Works, who had a monopoly on this. Apart from their second wire mill at nearby Whitebrook, there were no other wire mills before the second half of the 17th century. Despite the existence of mills, the drawing of wire down to fine sizes continued to be done manually.
According to a description in the early 20th century, "[w]ire is usually drawn of cylindrical form; but it may be made of any desired section by varying the outline of the holes in the draw-plate through which it is passed in the process of manufacture. The draw-plate or die is a piece of hard cast-iron or hard steel, or for fine work it may be a diamond or a ruby. The object of utilising precious stones is to enable the dies to be used for a considerable period without losing their size, and so producing wire of incorrect diameter. Diamond dies must be re-bored when they have lost their original diameter of hole, but metal dies are brought down to size again by hammering up the hole and then drifting it out to correct diameter with a punch." Production Wire is often reduced to the desired diameter and properties by repeated drawing through progressively smaller dies, or traditionally holes in draw plates. After a number of passes the wire may be annealed to facilitate more drawing or, if it is a finished product, to maximise ductility and conductivity. Electrical wires are usually covered with insulating materials, such as plastic, rubber-like polymers, or varnish. Insulating and jacketing of wires and cables is nowadays done by passing them through an extruder. Formerly, materials used for insulation included treated cloth or paper and various oil-based products. Since the mid-1960s, plastic and polymers exhibiting properties similar to rubber have predominated. Two or more wires may be wrapped concentrically, separated by insulation, to form coaxial cable. The wire or cable may be further protected with substances like paraffin, some kind of preservative compound, bitumen, lead, aluminum sheathing, or steel taping. Stranding or covering machines wind material onto wire which passes through quickly. Some of the smallest machines for cotton covering have a large drum, which grips the wire and moves it through toothed gears; the wire passes through the centre of disks mounted above a long bed, and the disks carry each a number of bobbins varying from six to twelve or more in different machines. A supply of covering material is wound on each bobbin, and the end is led on to the wire, which occupies a central position relatively to the bobbins; the latter being revolved at a suitable speed bodily with their disks, the cotton is consequently served on to the wire, winding in spiral fashion so as to overlap. If many strands are required the disks are duplicated, so that as many as sixty spools may be carried, the second set of strands being laid over the first. For heavier cables that are used for electric light and power as well as submarine cables, the machines are somewhat different in construction. The wire is still carried through a hollow shaft, but the bobbins or spools of covering material are set with their spindles at right angles to the axis of the wire, and they lie in a circular cage which rotates on rollers below. The various strands coming from the spools at various parts of the circumference of the cage all lead to a disk at the end of the hollow shaft. This disk has perforations through which each of the strands pass, thence being immediately wrapped on the cable, which slides through a bearing at this point. Toothed gears having certain definite ratios are used to cause the winding drum for the cable and the cage for the spools to rotate at suitable relative speeds which do not vary. 
The cages are multiplied for stranding with many tapes or strands, so that a machine may have six bobbins on one cage and twelve on the other. Forms Solid Solid wire, also called solid-core or single-strand wire, consists of one piece of metal wire. Solid wire is useful for wiring breadboards. Solid wire is cheaper to manufacture than stranded wire and is used where there is little need for flexibility in the wire. Solid wire also provides mechanical ruggedness; and, because it has relatively less surface area which is exposed to attack by corrosives, protection against the environment. Stranded Stranded wire is composed of a number of small wires bundled or wrapped together to form a larger conductor. Stranded wire is more flexible than solid wire of the same total cross-sectional area. Stranded wire is used when higher resistance to metal fatigue is required. Such situations include connections between circuit boards in multi-printed-circuit-board devices, where the rigidity of solid wire would produce too much stress as a result of movement during assembly or servicing; A.C. line cords for appliances; musical instrument cables; computer mouse cables; welding electrode cables; control cables connecting moving machine parts; mining machine cables; trailing machine cables; and numerous others. At high frequencies, current travels near the surface of the wire because of the skin effect, resulting in increased power loss in the wire. Stranded wire might seem to reduce this effect, since the total surface area of the strands is greater than the surface area of the equivalent solid wire, but ordinary stranded wire does not reduce the skin effect because all the strands are short-circuited together and behave as a single conductor. A stranded wire will have higher resistance than a solid wire of the same diameter because the cross-section of the stranded wire is not all copper; there are unavoidable gaps between the strands (this is the circle packing problem for circles within a circle). A stranded wire with the same cross-section of conductor as a solid wire is said to have the same equivalent gauge and always has a larger diameter. However, for many high-frequency applications, proximity effect is more severe than skin effect, and in some limited cases, simple stranded wire can reduce proximity effect. For better performance at high frequencies, litz wire, which has the individual strands insulated and twisted in special patterns, may be used. The more individual wire strands in a wire bundle, the more flexible, kink-resistant, break-resistant, and stronger the wire becomes. However, using more strands increases manufacturing complexity and cost. For geometrical reasons, the lowest number of strands usually seen is 7: one in the middle, with 6 surrounding it in close contact. The next level up is 19, which is another layer of 12 strands on top of the 7. After that the number varies, but 37 and 49 are common, then in the 70 to 100 range (the number is no longer exact). Larger numbers than that are typically found only in very large cables. For applications where the wire moves, 19 is the lowest that should be used (7 should only be used in applications where the wire is placed and then does not move), and 49 is much better. For applications with constant repeated movement, such as assembly robots and headphone wires, 70 to 100 is mandatory.
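As a rough illustration of the layered strand counts just described (a sketch, not drawn from the text's sources), the common totals 7, 19, 37, ... follow from adding concentric layers of 6, 12, 18, ... strands around a single centre strand:

```python
# Total strands for concentric-lay stranded wire: a single centre strand
# plus successive layers of 6, 12, 18, ... strands gives the familiar
# totals 7, 19, 37, ... (real cables deviate above ~37 strands).

def strand_count(layers: int) -> int:
    """Centre strand plus `layers` concentric layers."""
    return 1 + sum(6 * k for k in range(1, layers + 1))

for layers in (1, 2, 3):
    print(layers, strand_count(layers))  # -> 1 7, 2 19, 3 37
```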
For applications that need even more flexibility, even more strands are used (welding cables are the usual example, but also any application that needs to move wire in tight areas). One example is a 2/0 wire made from 5,292 strands of No. 36 gauge wire. The strands are organized by first creating a bundle of 7 strands. Then 7 of these bundles are put together into super bundles. Finally 108 super bundles are used to make the final cable. Each group of wires is wound in a helix so that when the wire is flexed, the part of a bundle that is stretched moves around the helix to a part that is compressed to allow the wire to have less stress. Prefused wire is stranded wire made up of strands that are heavily tinned, then fused together. Prefused wire has many of the properties of solid wire, except it is less likely to break. Braided A braided wire consists of a number of small strands of wire braided together. Braided wires do not break easily when flexed. Braided wires are often suitable as an electromagnetic shield in noise-reduction cables. Uses Wire has many uses. It forms the raw material of many important industries, such as the wire netting industry, engineered springs, wire-cloth making and wire rope spinning, in which it occupies a place analogous to a textile fiber. Wire-cloth of all degrees of strength and fineness of mesh is used for sifting and screening machinery, for draining paper pulp, for window screens, and for many other purposes. Vast quantities of aluminium, copper, nickel and steel wire are employed for telephone and data cables, and as conductors in electric power transmission, and heating. It is in no less demand for fencing, and much is consumed in the construction of suspension bridges, cages, etc. In the manufacture of stringed musical instruments and scientific instruments, wire is again largely used. Carbon and stainless spring steel wire have significant applications in engineered springs for critical automotive or industrial manufactured parts/components. Pin and hairpin making; the needle and fish-hook industries; nail, peg, and rivet making; and carding machinery consume large amounts of wire as feedstock. Not all metals and metallic alloys possess the physical properties necessary to make useful wire. The metals must in the first place be ductile and strong in tension, the quality on which the utility of wire principally depends. The principal metals suitable for wire, possessing almost equal ductility, are platinum, silver, iron, copper, aluminium, and gold; and it is only from these and certain of their alloys with other metals, principally brass and bronze, that wire is prepared. By careful treatment, extremely thin wire can be produced. Special-purpose wire is, however, made from other metals (e.g. tungsten wire for light bulb and vacuum tube filaments, because of its high melting temperature). Copper wires are also plated with other metals, such as tin, nickel, and silver to handle different temperatures, provide lubrication, and provide easier stripping of rubber insulation from copper. Metallic wires are often used for the lower-pitched sound-producing "strings" in stringed instruments, such as violins, cellos, and guitars, and percussive string instruments such as pianos, dulcimers, dobros, and cimbaloms. To increase the mass per unit length (and thus lower the pitch of the sound even further), the main wire may sometimes be helically wrapped with another, finer strand of wire.
Such musical strings are said to be "overspun"; the added wire may be circular in cross-section ("round-wound"), or flattened before winding ("flat-wound"). Examples include: Hook-up wire is small-to-medium gauge, solid or stranded, insulated wire, used for making internal connections inside electrical or electronic devices. It is often tin-plated to improve solderability. Wire bonding is the application of microscopic wires for making electrical connections inside semiconductor components and integrated circuits. Magnet wire is solid wire, usually copper, which, to allow closer winding when making electromagnetic coils, is insulated only with varnish, rather than the thicker plastic or other insulation commonly used on electrical wire. It is used for the winding of motors, transformers, inductors, generators, speaker coils, etc. (For further information about copper magnet wire, see Copper wire and cable#Magnet wire (Winding wire).) Coaxial cable is a cable consisting of an inner conductor, surrounded by a tubular insulating layer typically made from a flexible material with a high dielectric constant, all of which is then surrounded by another conductive layer (typically of fine woven wire for flexibility, or of a thin metallic foil), and then finally covered again with a thin insulating layer on the outside. The term coaxial comes from the inner conductor and the outer shield sharing the same geometric axis. Coaxial cables are often used as a transmission line for radio frequency signals. In a hypothetical ideal coaxial cable, the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors. Practical cables achieve this objective to a high degree. A coaxial cable provides extra protection of signals from external electromagnetic interference and effectively guides signals with low emission along the length of the cable. Speaker wire is used to make a low-resistance electrical connection between loudspeakers and audio amplifiers. Some high-end modern speaker wire consists of multiple electrical conductors individually insulated by plastic, similar to litz wire. Resistance wire is wire with higher than normal resistivity, often used for heating elements or for making wire-wound resistors. Nichrome wire is the most common type.
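As an illustrative aside (a sketch under assumed cable dimensions, not values from the text), the geometry of a coaxial cable sets its characteristic impedance; for an ideal line, Z0 = (60/sqrt(eps_r))*ln(D/d), where d is the inner-conductor diameter, D the shield's inner diameter, and eps_r the relative permittivity of the dielectric:

```python
import math

# Characteristic impedance of an ideal coaxial line. The example
# dimensions are typical of an RG-58-style cable, chosen only for
# illustration.

def coax_impedance_ohms(d_inner_mm: float, d_shield_mm: float, eps_r: float) -> float:
    return 60.0 / math.sqrt(eps_r) * math.log(d_shield_mm / d_inner_mm)

print(f"{coax_impedance_ohms(0.9, 2.95, 2.3):.1f} ohms")  # ~47 ohms
```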
White dwarf
A white dwarf is a stellar core remnant composed mostly of electron-degenerate matter. A white dwarf is very dense: into an Earth-sized volume it packs a mass comparable to the Sun's. No nuclear fusion takes place in a white dwarf; what light it radiates is from its residual heat. The nearest known white dwarf is Sirius B, at 8.6 light years, the smaller component of the Sirius binary star. There are currently thought to be eight white dwarfs among the hundred star systems nearest the Sun. The unusual faintness of white dwarfs was first recognized in 1910. The name white dwarf was coined by Willem Jacob Luyten in 1922. White dwarfs are thought to be the final evolutionary state of stars whose mass is not high enough to become a neutron star or black hole. This includes over 97% of the stars in the Milky Way. After the hydrogen-fusing period of a main-sequence star of low or intermediate mass ends, such a star will expand to a red giant and fuse helium to carbon and oxygen in its core by the triple-alpha process. If a red giant has insufficient mass to generate the core temperatures required to fuse carbon (around ), an inert mass of carbon and oxygen will build up at its center. After such a star sheds its outer layers and forms a planetary nebula, it will leave behind a core, which is the remnant white dwarf. Usually, white dwarfs are composed of carbon and oxygen (CO white dwarf). If the mass of the progenitor is between 7 and 9 solar masses, the core temperature will be sufficient to fuse carbon but not neon, in which case an oxygen–neon–magnesium (ONeMg or ONe) white dwarf may form. Stars of very low mass will be unable to fuse helium; hence, a helium white dwarf may form by mass loss in an interacting binary star system. Because the material in a white dwarf no longer undergoes fusion reactions, it lacks a heat source to support it against gravitational collapse. Instead, it is supported only by electron degeneracy pressure, causing it to be extremely dense. The physics of degeneracy yields a maximum mass for a non-rotating white dwarf, the Chandrasekhar limit, approximately 1.44 times the mass of the Sun, beyond which electron degeneracy pressure cannot support it. A carbon–oxygen white dwarf which approaches this limit, typically by mass transfer from a companion star, may explode as a Type Ia supernova via a process known as carbon detonation; SN 1006 is a likely example. A white dwarf, very hot when it forms, gradually cools as it radiates its energy. This radiation, which initially has a high color temperature, lessens and reddens over time. Eventually a white dwarf will cool enough that its material will begin to crystallize, starting with the core. No longer emitting significant heat or light, it will become a cold black dwarf. Because the length of time it takes for a white dwarf to reach this end state has been calculated to be longer than the current age of the known universe (approximately 13.8 billion years), it is thought that no black dwarfs exist. The oldest known white dwarfs still radiate at temperatures of a few thousand kelvins, which establishes an observational limit on the maximum possible age of the universe. Discovery The first white dwarf discovered was in the triple star system of 40 Eridani, which contains the relatively bright main sequence star 40 Eridani A, orbited at a distance by the closer binary system of the white dwarf 40 Eridani B and the main sequence red dwarf 40 Eridani C. The pair 40 Eridani B/C was discovered by William Herschel on 31 January 1783.
In 1910, Henry Norris Russell, Edward Charles Pickering and Williamina Fleming discovered that, despite being a dim star, 40 Eridani B was of spectral type A, or white. In 1939, Russell, looking back on the discovery, noted that Pickering had suggested that such exceptions lead to breakthroughs, and in this case it led to the discovery of white dwarfs. The spectral type of 40 Eridani B was officially described in 1914 by Walter Adams. The white dwarf companion of Sirius, Sirius B, was next to be discovered. During the nineteenth century, positional measurements of some stars became precise enough to measure small changes in their location. Friedrich Bessel used position measurements to determine that the stars Sirius (α Canis Majoris) and Procyon (α Canis Minoris) were changing their positions periodically. In 1844 he predicted that both stars had unseen companions. Bessel roughly estimated the period of the companion of Sirius to be about half a century; C.A.F. Peters computed an orbit for it in 1851. It was not until 31 January 1862 that Alvan Graham Clark observed a previously unseen star close to Sirius, later identified as the predicted companion. Walter Adams announced in 1915 that he had found the spectrum of Sirius B to be similar to that of Sirius. In 1917, Adriaan van Maanen discovered van Maanen's Star, an isolated white dwarf. These three white dwarfs, the first discovered, are the so-called classical white dwarfs. Eventually, many faint white stars that had high proper motion were found, indicating that they could be suspected to be low-luminosity stars close to the Earth, and hence white dwarfs. Willem Luyten appears to have been the first to use the term white dwarf when he examined this class of stars in 1922; the term was later popularized by Arthur Eddington. Despite these suspicions, the first non-classical white dwarf was not definitely identified until the 1930s. By 1939, 18 white dwarfs had been discovered. Luyten and others continued to search for white dwarfs in the 1940s. By 1950, over a hundred were known, and by 1999, over 2000 were known. Since then the Sloan Digital Sky Survey has found over 9000 white dwarfs, mostly new. Composition and structure Although white dwarfs are known with estimated masses as low as and as high as , the mass distribution is strongly peaked at , and the majority lie between . The estimated radii of observed white dwarfs are typically 0.8–2% the radius of the Sun; this is comparable to the Earth's radius of approximately 0.9% solar radius. A white dwarf, then, packs mass comparable to the Sun's into a volume that is typically a million times smaller than the Sun's; the average density of matter in a white dwarf must therefore be, very roughly,  times greater than the average density of the Sun, or approximately , or 1 tonne per cubic centimetre. A typical white dwarf has a density of between 10⁴ and . White dwarfs are composed of one of the densest forms of matter known, surpassed only by other compact stars such as neutron stars and the hypothetical quark stars. White dwarfs were found to be extremely dense soon after their discovery. If a star is in a binary system, as is the case for Sirius B or 40 Eridani B, it is possible to estimate its mass from observations of the binary orbit. This was done for Sirius B by 1910, yielding a mass estimate of , which compares well with a more modern estimate of .
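The density figures quoted above can be checked with a back-of-envelope calculation, packing one solar mass into an Earth-sized sphere (an illustrative sketch using standard constants, not a source calculation):

```python
import math

# One solar mass in an Earth-sized volume, using standard constants.
M_SUN_KG = 1.989e30
R_EARTH_M = 6.371e6

volume_m3 = 4.0 / 3.0 * math.pi * R_EARTH_M**3
density = M_SUN_KG / volume_m3
print(f"{density:.2e} kg/m^3")             # ~1.8e9 kg/m^3
print(f"{density / 1e9:.1f} tonnes/cm^3")  # ~1.8, of order a tonne per cubic centimetre
```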
Since hotter bodies radiate more energy than colder ones, a star's surface brightness can be estimated from its effective surface temperature, and that from its spectrum. If the star's distance is known, its absolute luminosity can also be estimated. From the absolute luminosity and distance, the star's surface area and its radius can be calculated. Reasoning of this sort led to the realization, puzzling to astronomers at the time, that due to their relatively high temperature and relatively low absolute luminosity, Sirius B and 40 Eridani B must be very dense. When Ernst Öpik estimated the density of a number of visual binary stars in 1916, he found that 40 Eridani B had a density of over  times that of the Sun, which was so high that he called it "impossible". As Arthur Eddington put it later, in 1927: We learn about the stars by receiving and interpreting the messages which their light brings to us. The message of the companion of Sirius when it was decoded ran: "I am composed of material 3000 times denser than anything you have ever come across; a ton of my material would be a little nugget that you could put in a matchbox." What reply can one make to such a message? The reply which most of us made in 1914 was — "Shut up. Don't talk nonsense." As Eddington pointed out in 1924, densities of this order implied that, according to the theory of general relativity, the light from Sirius B should be gravitationally redshifted. This was confirmed when Adams measured this redshift in 1925. Such densities are possible because white dwarf material is not composed of atoms joined by chemical bonds, but rather consists of a plasma of unbound nuclei and electrons. There is therefore no obstacle to placing nuclei closer together than the electron orbitals of normal matter would allow. Eddington wondered what would happen when this plasma cooled and the energy to keep the atoms ionized was no longer sufficient. This paradox was resolved by R. H. Fowler in 1926 by an application of the newly devised quantum mechanics. Since electrons obey the Pauli exclusion principle, no two electrons can occupy the same state, and they must obey Fermi–Dirac statistics, also introduced in 1926 to determine the statistical distribution of particles that satisfy the Pauli exclusion principle. At zero temperature, therefore, electrons cannot all occupy the lowest-energy, or ground, state; some of them would have to occupy higher-energy states, forming a band of lowest-available energy states, the Fermi sea. This state of the electrons, called degenerate, meant that a white dwarf could cool to zero temperature and still possess high energy. Compression of a white dwarf will increase the number of electrons in a given volume. Applying the Pauli exclusion principle, this will increase the kinetic energy of the electrons, thereby increasing the pressure. This electron degeneracy pressure supports a white dwarf against gravitational collapse. The pressure depends only on density and not on temperature. Degenerate matter is relatively compressible; this means that the density of a high-mass white dwarf is much greater than that of a low-mass white dwarf and that the radius of a white dwarf decreases as its mass increases. The existence of a limiting mass that no white dwarf can exceed without collapsing to a neutron star is another consequence of being supported by electron degeneracy pressure.
Such limiting masses were calculated for cases of an idealized, constant density star in 1929 by Wilhelm Anderson and in 1930 by Edmund C. Stoner. This value was corrected by considering hydrostatic equilibrium for the density profile, and the presently known value of the limit was first published in 1931 by Subrahmanyan Chandrasekhar in his paper "The Maximum Mass of Ideal White Dwarfs". For a non-rotating white dwarf, it is equal to approximately 5.7/μe² solar masses, where μe is the average molecular weight per electron of the star. As the carbon-12 and oxygen-16 that predominantly compose a carbon–oxygen white dwarf both have atomic numbers equal to half their atomic weight, one should take μe equal to 2 for such a star, leading to the commonly quoted value of about 1.4 solar masses. (Near the beginning of the 20th century, there was reason to believe that stars were composed chiefly of heavy elements, so, in his 1931 paper, Chandrasekhar set the average molecular weight per electron, μe, equal to 2.5, giving a limit of about 0.91 solar masses.) Together with William Alfred Fowler, Chandrasekhar received the Nobel Prize for this and other work in 1983. The limiting mass is now called the Chandrasekhar limit. If a carbon–oxygen white dwarf accreted enough matter to reach the Chandrasekhar limit of about 1.44 solar masses (for a non-rotating star), it would no longer be able to support the bulk of its mass through electron degeneracy pressure and, in the absence of nuclear reactions, would begin to collapse. However, the current view is that this limit is not normally attained; increasing temperature and density inside the core ignite carbon fusion as the star approaches the limit (to within about 1%) before collapse is initiated. In contrast, for a core primarily composed of oxygen, neon and magnesium, the collapsing white dwarf will typically form a neutron star. In this case, only a fraction of the star's mass will be ejected during the collapse. If a white dwarf star accumulates sufficient material from a stellar companion to raise its core temperature enough to ignite carbon fusion, it will undergo runaway nuclear fusion, completely disrupting it. There are three avenues by which this detonation is theorised to happen: stable accretion of material from a companion, the collision of two white dwarfs, or accretion that causes ignition in a shell that then ignites the core. The dominant mechanism by which type Ia supernovae are produced remains unclear. Despite this uncertainty in how type Ia supernovae are produced, type Ia supernovae have very uniform properties and are useful standard candles over intergalactic distances. Some calibrations are required to compensate for the gradual change in properties or different frequencies of abnormal luminosity supernovae at high redshift, and for small variations in brightness identified by light curve shape or spectrum. White dwarfs have low luminosity and therefore occupy a strip at the bottom of the Hertzsprung–Russell diagram, a graph of stellar luminosity versus color or temperature. They should not be confused with low-luminosity objects at the low-mass end of the main sequence, such as the hydrogen-fusing red dwarfs, whose cores are supported in part by thermal pressure, or the even lower-temperature brown dwarfs. Mass–radius relationship The relationship between the mass and radius of white dwarfs can be estimated using the nonrelativistic Fermi gas equation of state, which gives R/R☉ ∝ (M/M☉)^(−1/3), where R is the radius, M is the mass of the white dwarf, and the subscript ☉ indicates a value relative to the Sun.
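As a hedged numerical sketch of the two relations just quoted (the 5.7 coefficient and the 0.012 radius normalization are assumed values chosen to match the figures in the text, not derived here):

```python
# Limiting mass ~ 5.7 / mu_e^2 solar masses, and the nonrelativistic
# R ~ M^(-1/3) scaling, normalized so that radii fall in the quoted
# 0.8-2% solar-radius range.

def chandrasekhar_limit_msun(mu_e: float) -> float:
    return 5.7 / mu_e**2

def radius_rsun(mass_msun: float) -> float:
    return 0.012 * mass_msun ** (-1 / 3)

print(chandrasekhar_limit_msun(2.0))  # ~1.43 (carbon-oxygen composition)
print(chandrasekhar_limit_msun(2.5))  # ~0.91 (Chandrasekhar's 1931 assumption)
for m in (0.2, 0.6, 1.0):
    print(f"M = {m} Msun -> R = {radius_rsun(m):.4f} Rsun")  # heavier means smaller
```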
The chemical potential, μ, is a thermodynamic property giving the change in energy as one electron is added or removed; it depends on the composition of the star. Numerical treatments of more complete models have been tested against observational data with good agreement. Since this analysis uses the non-relativistic formula p²/2m for the kinetic energy, it is non-relativistic. When the electron velocity in a white dwarf is close to the speed of light, the kinetic energy formula approaches E = pc, where p is the momentum and c is the speed of light, and it can be shown that the Fermi gas model has no stable equilibrium in the ultrarelativistic limit. In particular, this analysis yields a maximum mass for a white dwarf, which scales as (ħc/G)^(3/2)·(μe·mH)^(−2), where mH is the mass of the hydrogen atom. The observation of many white dwarf stars implies that either they started with masses similar to the Sun or something dramatic happened to reduce their mass. For a more accurate computation of the mass-radius relationship and limiting mass of a white dwarf, one must compute the equation of state that describes the relationship between density and pressure in the white dwarf material. If the density and pressure are both set equal to functions of the radius from the center of the star, the system of equations consisting of the hydrostatic equation together with the equation of state can then be solved to find the structure of the white dwarf at equilibrium. In the non-relativistic case, the radius is inversely proportional to the cube root of the mass. Relativistic corrections will alter the result so that the radius becomes zero at a finite value of the mass. This is the limiting value of the mass – called the Chandrasekhar limit – at which the white dwarf can no longer be supported by electron degeneracy pressure. The graph on the right shows the result of such a computation. It shows how radius varies with mass for non-relativistic (blue curve) and relativistic (green curve) models of a white dwarf. Both models treat the white dwarf as a cold Fermi gas in hydrostatic equilibrium. The average molecular weight per electron, μe, has been set equal to 2. Radius is measured in standard solar radii and mass in standard solar masses. These computations all assume that the white dwarf is non-rotating. If the white dwarf is rotating, the equation of hydrostatic equilibrium must be modified to take into account the centrifugal pseudo-force arising from working in a rotating frame. For a uniformly rotating white dwarf, the limiting mass increases only slightly. If the star is allowed to rotate nonuniformly, and viscosity is neglected, then, as was pointed out by Fred Hoyle in 1947, there is no limit to the mass for which it is possible for a model white dwarf to be in static equilibrium. Not all of these model stars will be dynamically stable. Rotating white dwarfs and estimates of their diameters in terms of the angular velocity of rotation have been treated in the rigorous mathematical literature. The fine structure of the free boundary of white dwarfs has also been analysed with mathematical rigour. Radiation and cooling The degenerate matter that makes up the bulk of a white dwarf has a very low opacity, because any absorption of a photon requires that an electron transition to a higher empty state, which may not be possible if the energy of the photon does not match the quantum states available to that electron. Radiative heat transfer within a white dwarf is therefore low; the degenerate interior does, however, have a high thermal conductivity.
As a result, the interior of the white dwarf maintains an almost uniform temperature as it cools down, starting at approximately 10⁸ K shortly after the formation of the white dwarf and reaching less than 10⁶ K for the coolest known white dwarfs. An outer shell of non-degenerate matter sits on top of the degenerate core. The outermost layers, which are cooler than the interior, radiate roughly as a black body. A white dwarf remains visible for a long time, as its tenuous outer atmosphere slowly radiates the thermal content of the degenerate interior. The visible radiation emitted by white dwarfs varies over a wide color range, from the whitish-blue color of an O-, B- or A-type main sequence star to the yellow-orange of a late K- or early M-type star. White dwarf effective surface temperatures extend from over  K to barely under 4000 K. In accordance with the Stefan–Boltzmann law, luminosity increases with increasing surface temperature (proportional to T⁴); this surface temperature range corresponds to a luminosity from over 100 times that of the Sun to under that of the Sun. Hot white dwarfs, with surface temperatures in excess of , have been observed to be sources of soft (i.e., lower-energy) X-rays. This enables the composition and structure of their atmospheres to be studied by soft X-ray and extreme ultraviolet observations. White dwarfs also radiate neutrinos through the Urca process. This process has more effect on hotter and younger white dwarfs. Because neutrinos can pass easily through stellar plasma, they can drain energy directly from the dwarf's interior; this mechanism is the dominant contribution to cooling for approximately the first 20 million years of a white dwarf's existence. As was explained by Leon Mestel in 1952, unless the white dwarf accretes matter from a companion star or other source, its radiation comes from its stored heat, which is not replenished. White dwarfs have an extremely small surface area to radiate this heat from, so they cool gradually, remaining hot for a long time. As a white dwarf cools, its surface temperature decreases, the radiation that it emits reddens, and its luminosity decreases. Since the white dwarf has no energy sink other than radiation, it follows that its cooling slows with time. The rate of cooling has been estimated for a carbon white dwarf of with a hydrogen atmosphere. After initially taking approximately 1.5 billion years to cool to a surface temperature of 7140 K, cooling approximately 500 more kelvins to 6590 K takes around 0.3 billion years, but the next two steps of around 500 kelvins (to 6030 K and 5550 K) take first 0.4 and then 1.1 billion years. Most observed white dwarfs have relatively high surface temperatures, between 8000 K and . A white dwarf, though, spends more of its lifetime at cooler temperatures than at hotter temperatures, so we should expect that there are more cool white dwarfs than hot white dwarfs. Once we adjust for the selection effect that hotter, more luminous white dwarfs are easier to observe, we do find that decreasing the temperature range examined results in finding more white dwarfs. This trend stops when we reach extremely cool white dwarfs; few white dwarfs are observed with surface temperatures below , and one of the coolest so far observed, WD J2147–4035, has a surface temperature of approximately 3050 K. The reason for this is that the Universe's age is finite; there has not been enough time for white dwarfs to cool below this temperature.
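The cooling ages quoted above can be restated as a running total, which makes the slowing of the cooling explicit (the figures are copied from the estimate cited in the text):

```python
# (surface temperature in K, additional time in Gyr to cool to it),
# for the carbon white dwarf with a hydrogen atmosphere described above.
steps = [(7140, 1.5), (6590, 0.3), (6030, 0.4), (5550, 1.1)]

elapsed = 0.0
for temp_k, delta_gyr in steps:
    elapsed += delta_gyr
    print(f"{temp_k} K after ~{elapsed:.1f} Gyr total")
# Cooling visibly slows: the final ~500 K step takes 1.1 Gyr,
# versus 0.3 Gyr for a similar step earlier in the sequence.
```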
The white dwarf luminosity function can therefore be used to find the time when stars started to form in a region; an estimate for the age of our galactic disk found in this way is 8 billion years. A white dwarf will eventually, in many trillions of years, cool and become a non-radiating black dwarf in approximate thermal equilibrium with its surroundings and with the cosmic background radiation. No black dwarfs are thought to exist yet. At very low temperatures (<4000 K) white dwarfs with hydrogen in their atmosphere will be affected by collision-induced absorption (CIA) of hydrogen molecules colliding with helium atoms. This affects the optical red and infrared brightness of white dwarfs with a hydrogen or mixed hydrogen-helium atmosphere. This makes old white dwarfs with this kind of atmosphere bluer than the main cooling sequence. White dwarfs with hydrogen-poor atmospheres, such as WD J2147–4035, are less affected by CIA and therefore have a yellow to orange color. White dwarf core material is a completely ionized plasma – a mixture of nuclei and electrons – that is initially in a fluid state. It was theoretically predicted in the 1960s that at a late stage of cooling, it should crystallize into a solid state, starting at its center. The crystal structure is thought to be a body-centered cubic lattice. In 1995 it was suggested that asteroseismological observations of pulsating white dwarfs yielded a potential test of the crystallization theory, and in 2004, observations were made that suggested approximately 90% of the mass of BPM 37093 had crystallized. Other work gives a crystallized mass fraction of between 32% and 82%. As a white dwarf core undergoes crystallization into a solid phase, latent heat is released, which provides a source of thermal energy that delays its cooling. Another possible mechanism that was suggested to explain the seeming delay in the cooling of some types of white dwarfs is a solid–liquid distillation process: the crystals formed in the core are buoyant and float up, thereby displacing heavier liquid downward, thus causing a net release of gravitational energy. Chemical fractionation between the ionic species in the plasma mixture can release a similar or even greater amount of energy. This energy release was first confirmed in 2019 after the identification of a pile-up in the cooling sequence of more than white dwarfs observed with the Gaia satellite. Low-mass helium white dwarfs (mass ), often referred to as extremely low-mass white dwarfs (ELM WDs), are formed in binary systems. As a result of their hydrogen-rich envelopes, residual hydrogen burning via the CNO cycle may keep these white dwarfs hot for hundreds of millions of years. In addition, they remain in a bloated proto-white dwarf stage for up to 2 Gyr before they reach the cooling track. Atmosphere and spectra Although most white dwarfs are thought to be composed of carbon and oxygen, spectroscopy typically shows that their emitted light comes from an atmosphere that is observed to be either hydrogen or helium dominated. The dominant element is usually at least 1000 times more abundant than all other elements. As explained by Schatzman in the 1940s, the high surface gravity is thought to cause this purity by gravitationally separating the atmosphere so that heavy elements are below and the lighter above.
This atmosphere, the only part of the white dwarf visible to us, is thought to be the top of an envelope that is a residue of the star's envelope in the AGB phase and may also contain material accreted from the interstellar medium. The envelope is believed to consist of a helium-rich layer with mass no more than of the star's total mass, which, if the atmosphere is hydrogen-dominated, is overlain by a hydrogen-rich layer with mass approximately of the star's total mass. Although thin, these outer layers determine the thermal evolution of the white dwarf. The degenerate electrons in the bulk of a white dwarf conduct heat well. Most of a white dwarf's mass is therefore at almost the same temperature (isothermal), and it is also hot: a white dwarf with surface temperature between and will have a core temperature between approximately and . The white dwarf is kept from cooling very quickly only by its outer layers' opacity to radiation. The first attempt to classify white dwarf spectra appears to have been by G. P. Kuiper in 1941, and various classification schemes have been proposed and used since then. The system currently in use was introduced by Edward M. Sion, Jesse L. Greenstein and their coauthors in 1983 and has been subsequently revised several times. It classifies a spectrum by a symbol that consists of an initial D, a letter describing the primary feature of the spectrum followed by an optional sequence of letters describing secondary features of the spectrum, and a temperature index number, computed by dividing by the effective temperature. For example, a white dwarf with only He I lines in its spectrum and an effective temperature of could be given the classification of DB3, or, if warranted by the precision of the temperature measurement, DB3.5. Likewise, a white dwarf with a polarized magnetic field, an effective temperature of , and a spectrum dominated by He I lines that also had hydrogen features could be given the classification of DBAP3. The symbols "?" and ":" may also be used if the correct classification is uncertain. White dwarfs whose primary spectral classification is DA have hydrogen-dominated atmospheres. They make up the majority, approximately 80%, of all observed white dwarfs. The next class in number is of DBs, approximately 16%. The hot, above , DQ class (roughly 0.1%) has carbon-dominated atmospheres. Those classified as DB, DC, DO, DZ, and cool DQ have helium-dominated atmospheres. Assuming that carbon and metals are not present, which spectral classification is seen depends on the effective temperature. Between approximately to , the spectrum will be classified DO, dominated by singly ionized helium. From to , the spectrum will be DB, showing neutral helium lines, and below about , the spectrum will be featureless and classified DC. Molecular hydrogen (H2) has been detected in spectra of the atmospheres of some white dwarfs. While theoretical work suggests that some types of white dwarfs may have stellar coronae, searches at X-ray and radio wavelengths, where coronae are most easily detected, have been unsuccessful. Metal-rich white dwarfs Around 25–33% of white dwarfs have metal lines in their spectra, which is notable because any heavy elements in a white dwarf should sink into the star's interior in just a small fraction of the star's lifetime. The prevailing explanation for metal-rich white dwarfs is that they have recently accreted rocky planetesimals.
The bulk composition of the accreted object can be measured from the strengths of the metal lines. For example, a 2015 study of the white dwarf Ton 345 concluded that its metal abundances were consistent with those of a differentiated, rocky planet whose mantle had been eroded by the host star's wind during its asymptotic giant branch phase. Magnetic field Magnetic fields in white dwarfs with a strength at the surface of 1 million gauss (100 teslas) were predicted by P. M. S. Blackett in 1947 as a consequence of a physical law he had proposed, which stated that an uncharged, rotating body should generate a magnetic field proportional to its angular momentum. This putative law, sometimes called the Blackett effect, was never generally accepted, and by the 1950s even Blackett felt it had been refuted. In the 1960s, it was proposed that white dwarfs might have magnetic fields due to conservation of total surface magnetic flux that existed in its progenitor star phase. A surface magnetic field of 100 gauss (0.01 T) in the progenitor star would thus become a surface magnetic field of 100 × 100² = 1 million gauss (100 T) once the star's radius had shrunk by a factor of 100. The first magnetic white dwarf to be discovered was GJ 742 (also known as ), which was identified by James Kemp, John Swedlund, John Landstreet and Roger Angel in 1970 to host a magnetic field by its emission of circularly polarized light. It is thought to have a surface field of approximately 300 million gauss (30 kT). Since 1970, magnetic fields have been discovered in well over 200 white dwarfs, ranging from to  gauss (0.2 T to 100 kT). Many of the presently known magnetic white dwarfs are identified by low-resolution spectroscopy, which is able to reveal the presence of a magnetic field of 1 megagauss or more. Thus the basic identification process also sometimes results in discovery of magnetic fields. White dwarf magnetic fields may also be measured without spectral lines, using the techniques of broadband circular polarimetry, or possibly through measurement of their frequencies of radio emission via the electron cyclotron maser. It has been estimated that at least 10% of white dwarfs have fields in excess of 1 million gauss (100 T). The magnetic fields in a white dwarf may allow for the existence of a new type of chemical bond, perpendicular paramagnetic bonding, in addition to ionic and covalent bonds, though detecting molecules bonded in this way is expected to be difficult. The highly magnetized white dwarf in the binary system AR Scorpii was identified in 2016 as the first pulsar in which the compact object is a white dwarf instead of a neutron star. A second white dwarf pulsar was discovered in 2023. Variability Early calculations suggested that there might be white dwarfs whose luminosity varied with a period of around 10 seconds, but searches in the 1960s failed to observe this. The first variable white dwarf found was HL Tau 76; in 1965 and 1966, it was observed to vary with a period of approximately 12.5 minutes. The reason for this period being longer than predicted is that the variability of HL Tau 76, like that of the other pulsating variable white dwarfs known, arises from non-radial gravity wave pulsations.
Known types of pulsating white dwarf include the DAV, or ZZ Ceti, stars, including HL Tau 76, with hydrogen-dominated atmospheres and the spectral type DA; DBV, or V777 Her, stars, with helium-dominated atmospheres and the spectral type DB; and GW Vir stars, sometimes subdivided into DOV and PNNV stars, with atmospheres dominated by helium, carbon, and oxygen. GW Vir stars are not, strictly speaking, white dwarfs, but are stars that are in a position on the Hertzsprung–Russell diagram between the asymptotic giant branch and the white dwarf region. They may be called pre-white dwarfs. These variables all exhibit small (1%–30%) variations in light output, arising from a superposition of vibrational modes with periods of hundreds to thousands of seconds. Observation of these variations gives asteroseismological evidence about the interiors of white dwarfs. Formation White dwarfs are thought to represent the end point of stellar evolution for main-sequence stars with masses from about . The composition of the white dwarf produced will depend on the initial mass of the star. Current galactic models suggest the Milky Way galaxy currently contains about ten billion white dwarfs. Stars with very low mass If the mass of a main-sequence star is lower than approximately half a solar mass, it will never become hot enough to ignite and fuse helium in its core. It is thought that, over a lifespan that considerably exceeds the age of the universe ( 13.8 billion years), such a star will eventually burn all its hydrogen, for a while becoming a blue dwarf, and end its evolution as a helium white dwarf composed chiefly of helium-4 nuclei. Due to the very long time this process takes, it is not thought to be the origin of the observed helium white dwarfs. Rather, they are thought to be mostly the product of mass loss in binary systems. Proposals to explain those helium white dwarfs that are not part of binary systems include mass loss due to a large planetary companion, stars being stripped of material by companions exploding as supernovae, and various types of stellar mergers. Stars with low to medium mass If the mass of a main-sequence star is between , its core will become sufficiently hot to fuse helium into carbon and oxygen via the triple-alpha process, but it will never become sufficiently hot to fuse carbon into neon. Near the end of the period in which it undergoes fusion reactions, such a star will have a carbon–oxygen core that does not undergo fusion reactions, surrounded by an inner helium-burning shell and an outer hydrogen-burning shell. On the Hertzsprung–Russell diagram, it will be found on the asymptotic giant branch. It will then expel most of its outer material, creating a planetary nebula, until only the carbon–oxygen core is left. This process is responsible for the carbon–oxygen white dwarfs that form the vast majority of observed white dwarfs. Stars with medium to high mass If a star is massive enough, its core will eventually become sufficiently hot to fuse carbon to neon, and then to fuse neon to iron. Such a star will not become a white dwarf, because the mass of its central, non-fusing core, initially supported by electron degeneracy pressure, will eventually exceed the largest possible mass supportable by degeneracy pressure. At this point the core of the star will collapse and it will explode in a core-collapse supernova that will leave behind a remnant neutron star, black hole, or possibly a more exotic form of compact star. 
Some main-sequence stars, of perhaps , although sufficiently massive to fuse carbon to neon and magnesium, may be insufficiently massive to fuse neon. Such a star may leave a remnant white dwarf composed chiefly of oxygen, neon, and magnesium, provided that its core does not collapse, and provided that fusion does not proceed so violently as to blow apart the star in a supernova. Although a few white dwarfs have been identified that may be of this type, most evidence for the existence of such comes from the novae called ONeMg or neon novae. The spectra of these novae exhibit abundances of neon, magnesium, and other intermediate-mass elements that appear to be only explicable by the accretion of material onto an oxygen–neon–magnesium white dwarf. Type Iax supernova Type Iax supernovae, which involve helium accretion by a white dwarf, have been proposed to be a channel for transformation of this type of stellar remnant. In this scenario, the carbon detonation produced in a Type Ia supernova is too weak to destroy the white dwarf, expelling just a small part of its mass as ejecta, but produces an asymmetric explosion that kicks the star, often known as a zombie star, to the high speeds of a hypervelocity star. The matter processed in the failed detonation is re-accreted by the white dwarf with the heaviest elements such as iron falling to its core where it accumulates. These iron-core white dwarfs would be smaller than the carbon–oxygen kind of similar mass and would cool and crystallize faster than those. Fate A white dwarf is stable once formed and will continue to cool almost indefinitely, eventually to become a black dwarf. Assuming that the universe continues to expand, it is thought that in 10¹⁹ to 10²⁰ years, the galaxies will evaporate as their stars escape into intergalactic space. White dwarfs should generally survive galactic dispersion, although an occasional collision between white dwarfs may produce a new fusing star or a super-Chandrasekhar mass white dwarf that will explode in a Type Ia supernova. The subsequent lifetime of white dwarfs is thought to be on the order of the hypothetical lifetime of the proton, known to be at least 10³⁴–10³⁵ years. Some grand unified theories predict a proton lifetime between 10³⁰ and 10³⁶ years. If these theories are not valid, the proton might still decay by complicated nuclear reactions or through quantum gravitational processes involving virtual black holes; in these cases, the lifetime is estimated to be no more than 10²⁰⁰ years. If protons do decay, the mass of a white dwarf will decrease very slowly with time as its nuclei decay, until it loses enough mass to become a nondegenerate lump of matter, and finally disappears completely. A white dwarf can also be cannibalized or evaporated by a companion star, causing the white dwarf to lose so much mass that it becomes a planetary mass object. The resultant object, orbiting the former companion, now host star, could be a helium planet or diamond planet. Debris disks and planets A white dwarf's stellar and planetary system is inherited from its progenitor star and may interact with the white dwarf in various ways. There are several indications that a white dwarf has a remnant planetary system. The most common observable evidence of a remnant planetary system is pollution of the spectrum of a white dwarf with metal absorption lines. 27–50% of white dwarfs show a spectrum polluted with metals, but these heavy elements settle out in the atmosphere of white dwarfs colder than .
The most widely accepted hypothesis is that this pollution comes from tidally disrupted rocky bodies. The first observation of a metal-polluted white dwarf was by van Maanen in 1917 at the Mount Wilson Observatory and is now recognized as the first evidence of exoplanets in astronomy. The white dwarf van Maanen 2 shows iron, calcium and magnesium in its atmosphere, but van Maanen misclassified it as the faintest F-type star based on the calcium H- and K-lines. The nitrogen in white dwarfs is thought to come from nitrogen-ice of extrasolar Kuiper Belt objects, the lithium is thought to come from accreted crust material and the beryllium is thought to come from exomoons. A less common observable evidence is infrared excess due to a flat and optically thick debris disk, which is found in around 1%–4% of white dwarfs. The first white dwarf with infrared excess was discovered by Zuckerman and Becklin in 1987 in the near-infrared around Giclas 29-38 and later confirmed as a debris disk. White dwarfs hotter than sublimate all the dust formed by tidally disrupting a rocky body, preventing the formation of a debris disk. In colder white dwarfs, a rocky body might be tidally disrupted near the Roche radius and forced into a circular orbit by the Poynting–Robertson drag, which is stronger for less massive white dwarfs. The Poynting–Robertson drag will also cause the dust to orbit closer and closer towards the white dwarf, until it eventually sublimates and the disk disappears. A debris disk will have a lifetime of around a few million years for white dwarfs hotter than . Colder white dwarfs can have disk lifetimes of a few tens of millions of years, which is enough time to tidally disrupt a second rocky body and form a second disk around a white dwarf, such as the two rings around LSPM J0207+3331. The least common observable evidence of planetary systems is the detection of major or minor planets. Only a handful of giant planets and a handful of minor planets are known around white dwarfs. Infrared spectroscopic observations made by NASA's Spitzer Space Telescope of the central star of the Helix Nebula suggest the presence of a dust cloud, which may be caused by cometary collisions. It is possible that infalling material from this may cause X-ray emission from the central star. Similarly, observations made in 2004 indicated the presence of a dust cloud around the young (estimated to have formed from its AGB progenitor about 500 million years ago) white dwarf G29-38, which may have been created by tidal disruption of a comet passing close to the white dwarf. Some estimates based on the metal content of the atmospheres of white dwarfs suggest that at least 15% of them may be orbited by planets or asteroids, or at least their debris. Another suggested idea is that white dwarfs could be orbited by the stripped cores of rocky planets that survived the red giant phase of their star but lost their outer layers; given that such planetary remnants would likely be made of metals, attempts could be made to detect them by looking for the signatures of their interaction with the white dwarf's magnetic field. Other suggested ideas of how white dwarfs are polluted with dust involve the scattering of asteroids by planets or via planet-planet scattering. Liberation of exomoons from their host planet could cause white dwarf pollution with dust. Either the liberation could cause asteroids to be scattered towards the white dwarf or the exomoon could be scattered into the Roche radius of the white dwarf.
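The tidal-disruption distance mentioned above can be estimated with the rigid-body Roche limit, d ≈ 1.26·R·(ρwd/ρbody)^(1/3); the white dwarf and asteroid properties below are assumed typical values, not figures from the text:

```python
# Rough rigid-body Roche-limit estimate for the tidal disruption
# distance around a white dwarf.
R_WD_M = 7.0e6    # ~1% of the solar radius, in metres (assumed)
RHO_WD = 1.8e9    # kg/m^3, of order the mean density estimated earlier
RHO_ROCK = 3000.0 # kg/m^3, a rocky planetesimal (assumed)

d_m = 1.26 * R_WD_M * (RHO_WD / RHO_ROCK) ** (1 / 3)
print(f"disruption inside ~{d_m:.2e} m (~{d_m / 6.957e8:.2f} solar radii)")
```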
The mechanism behind the pollution of white dwarfs in binaries was also explored, as these systems are more likely to lack a major planet, but this idea cannot explain the presence of dust around single white dwarfs. While old white dwarfs show evidence of dust accretion, white dwarfs older than ~1 billion years or >7000 K with dusty infrared excess were not detected until the discovery of LSPM J0207+3331 in 2018, which has a cooling age of ~3 billion years. The white dwarf shows two dusty components that are being explained with two rings with different temperatures. Another possible way to detect planetary systems around white dwarfs is through their radio emissions. In 2004 and 2005, A. J. Willes and K. Wu hypothesized that when an exoplanet travels through the magnetosphere of a white dwarf, it may generate auroral radio emissions from the magnetic poles of the white dwarf, similar to how Io stimulates radio emissions from Jupiter. However, a search for such radio emission from nine white dwarfs using the Arecibo radio telescope has so far found none. The metal-rich white dwarf WD 1145+017 is the first white dwarf observed with a disintegrating minor planet that transits the star. The disintegration of the planetesimal generates a debris cloud that passes in front of the star every 4.5 hours, causing a 5-minute-long fade in the star's optical brightness. The depth of the transit is highly variable. The giant planet WD J0914+1914b is being evaporated by the strong ultraviolet radiation of the hot white dwarf. Part of the evaporated material is being accreted in a gaseous disk around the white dwarf. The weak hydrogen line as well as other lines in the spectrum of the white dwarf revealed the presence of the giant planet. The white dwarf WD 0145+234 shows brightening in the mid-infrared, seen in NEOWISE data. The brightening, not seen before 2018, may be due to the tidal disruption of an exoasteroid, the first time such an event has been observed. WD 1856+534 is the first transiting major planet to be observed orbiting a white dwarf, and remains the only such example as of 2023. MOA-2010-BLG-477L, a white dwarf discovered thanks to a microlensing event, is also known to have a giant planet. GD 140 and LAWD 37 are suspected to have giant exoplanets due to an anomaly in the Hipparcos-Gaia proper motion. For GD 140 it is suspected to be a planet several times more massive than Jupiter and for LAWD 37 it is suspected to be a planet less massive than Jupiter. Additionally, WD 0141-675 was suspected to have a super-Jupiter with an orbital period of 33.65 days based on Gaia astrometry. This is remarkable because WD 0141-675 is polluted with metals, and metal-polluted white dwarfs have long been suspected to host giant planets that disturb the orbits of minor planets, causing the pollution. Both GD 140 and WD 0141 will be observed with JWST in cycle 2 with the aim of detecting infrared excess caused by the planets. However, the planet candidate at WD 0141-675 was found to be a false positive caused by a software error. Habitability A search has been proposed for transits of hypothetical Earth-like planets around white dwarfs with surface temperatures of less than . Such stars could harbor a habitable zone at a distance of 0.005 to 0.02 AU that would last upwards of 3 billion years. This is so close that any habitable planets would be tidally locked. As a white dwarf has a size similar to that of a planet, these kinds of transits would produce strong eclipses.
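For a sense of scale (an illustrative sketch; the 0.6-solar-mass white dwarf is an assumed typical value, not from the text), Kepler's third law gives the orbital periods at the habitable-zone distances just quoted:

```python
import math

# Orbital period P = 2*pi*sqrt(a^3 / (G*M)) at the quoted habitable-zone
# distances around an assumed 0.6-solar-mass white dwarf.
G = 6.674e-11    # m^3 kg^-1 s^-2
M_SUN = 1.989e30 # kg
AU = 1.496e11    # m

def period_hours(a_au: float, m_star_msun: float) -> float:
    a = a_au * AU
    return 2 * math.pi * math.sqrt(a**3 / (G * m_star_msun * M_SUN)) / 3600

for a in (0.005, 0.01, 0.02):
    print(f"a = {a} AU -> P = {period_hours(a, 0.6):.1f} h")  # ~4, 11, 32 hours
```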
Newer research casts some doubts on this idea, given that the close orbits of those hypothetical planets around their parent stars would subject them to strong tidal forces that could render them uninhabitable by triggering a greenhouse effect. Another suggested constraint to this idea is the origin of those planets. Leaving aside formation from the accretion disk surrounding the white dwarf, there are two ways a planet could end in a close orbit around stars of this kind: by surviving being engulfed by the star during its red giant phase, and then spiralling inward, or inward migration after the white dwarf has formed. The former case is implausible for low-mass bodies, as they are unlikely to survive being absorbed by their stars. In the latter case, the planets would have to expel so much orbital energy as heat, through tidal interactions with the white dwarf, that they would likely end as uninhabitable embers. Binary stars and novae If a white dwarf is in a binary star system and is accreting matter from its companion, a variety of phenomena may occur, including novae and Type Ia supernovae. It may also be a super-soft x-ray source if it is able to take material from its companion fast enough to sustain fusion on its surface. On the other hand, phenomena in binary systems such as tidal interaction and star–disc interaction, moderated by magnetic fields or not, act on the rotation of accreting white dwarfs. In fact, the (securely known) fastest-spinning white dwarfs are members of binary systems (the fastest one being the white dwarf in CTCV J2056-3014). A close binary system of two white dwarfs can lose angular momentum and radiate energy in the form of gravitational waves, causing their mutual orbit to steadily shrink until the stars merge. Type Ia supernovae The mass of an isolated, nonrotating white dwarf cannot exceed the Chandrasekhar limit of ~ . This limit may increase if the white dwarf is rotating rapidly and nonuniformly. White dwarfs in binary systems can accrete material from a companion star, increasing both their mass and their density. As their mass approaches the Chandrasekhar limit, this could theoretically lead to either the explosive ignition of fusion in the white dwarf or its collapse into a neutron star. There are two models that might explain the progenitor systems of Type Ia supernovae: the single-degenerate model and the double-degenerate model. In the single-degenerate model, a carbon–oxygen white dwarf accretes mass and compresses its core by pulling mass from a companion non-degenerate star. It is believed that compressional heating of the core leads to ignition of carbon fusion as the mass approaches the Chandrasekhar limit. Because the white dwarf is supported against gravity by quantum degeneracy pressure instead of by thermal pressure, adding heat to the star's interior increases its temperature but not its pressure, so the white dwarf does not expand and cool in response. Rather, the increased temperature accelerates the rate of the fusion reaction, in a runaway process that feeds on itself. The thermonuclear flame consumes much of the white dwarf in a few seconds, causing a Type Ia supernova explosion that obliterates the star. In another possible mechanism for Type Ia supernovae, the double-degenerate model, two carbon–oxygen white dwarfs in a binary system merge, creating an object with mass greater than the Chandrasekhar limit in which carbon fusion is then ignited. In both cases, the white dwarfs are not expected to survive the Type Ia supernova. 
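The gravitational-wave driven inspiral invoked in the double-degenerate model can be illustrated with the standard circular-orbit merger timescale (Peters 1964); the masses and separation below are assumed example values, not figures from the text:

```python
import math

# Merger time for a circular binary radiating gravitational waves:
#   t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))
G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30

def merger_time_yr(m1_msun: float, m2_msun: float, a_m: float) -> float:
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    t_s = 5 * C**5 * a_m**4 / (256 * G**3 * m1 * m2 * (m1 + m2))
    return t_s / 3.156e7

# Two 0.6-solar-mass white dwarfs separated by roughly the Earth-Moon distance:
print(f"{merger_time_yr(0.6, 0.6, 3.84e8):.1e} yr")  # ~3e7 years for this example
```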
The single-degenerate model was long the favored mechanism for Type Ia supernovae, but observations now suggest that the double-degenerate model is the more likely scenario. Predicted rates of white dwarf–white dwarf mergers are comparable to the rate of Type Ia supernovae and would explain the lack of hydrogen in the spectra of Type Ia supernovae. However, the main mechanism for Type Ia supernovae remains an open question. In the single-degenerate scenario, the accretion rate onto the white dwarf needs to be within a narrow range, dependent on its mass, so that the hydrogen burning on the surface of the white dwarf is stable. If the accretion rate is too low, novae on the surface of the white dwarf will blow away accreted material. If it is too high, the white dwarf will expand, and the white dwarf and companion star will end up in a common envelope. This stops the growth of the white dwarf, preventing it from reaching the Chandrasekhar limit and exploding. In the single-degenerate model the companion star is expected to survive, but there is no strong evidence of such a star near Type Ia supernova sites. In the double-degenerate scenario, white dwarfs need to be in very close binaries; otherwise their inspiral time is longer than the age of the universe. It is also likely that instead of a Type Ia supernova, the merger of two white dwarfs will lead to core collapse. If a white dwarf accretes material quickly, the core can ignite off-center, which leads to gravitational instabilities that could create a neutron star. The historically bright SN 1006 is thought to have been a Type Ia supernova from a white dwarf, possibly the merger of two white dwarfs. Tycho's Supernova of 1572 was also a Type Ia supernova, and its remnant has been detected. WD 0810–353, a white dwarf 11 parsecs away from the Sun, is possibly a hypervelocity runaway ejected from a Type Ia supernova, though this has been disputed. Post-common envelope binary A post-common envelope binary (PCEB) is a binary consisting of a white dwarf or hot subdwarf and a close, tidally locked red dwarf (in other cases this might be a brown dwarf instead of a red dwarf). These binaries form when the red dwarf is engulfed during the red giant phase. As the red dwarf orbits inside the common envelope, it is slowed down in the denser environment. This loss of orbital speed is compensated by a decrease in the orbital distance between the red dwarf and the core of the red giant. The red dwarf spirals inwards towards the core and might merge with the core. If this does not happen and instead the common envelope is ejected, then the binary ends up in a close orbit, consisting of a white dwarf and a red dwarf. This type of binary is called a post-common envelope binary. The evolution of the PCEB continues as the two dwarf stars orbit closer and closer through magnetic braking and the emission of gravitational waves. The binary might then evolve into one of several dramatic outcomes: a high-field magnetic white dwarf, a white dwarf pulsar, a double-degenerate binary, or even a Type Ia supernova. Because a PCEB may evolve at some point into a cataclysmic variable, some of them are also called pre-cataclysmic variables. Cataclysmic variables Before accretion of material pushes a white dwarf close to the Chandrasekhar limit, accreted hydrogen-rich material on the surface may ignite in a less destructive type of thermonuclear explosion powered by hydrogen fusion. These surface explosions can be repeated as long as the white dwarf's core remains intact.
This weaker kind of repetitive cataclysmic phenomenon is called a (classical) nova. Astronomers have also observed dwarf novae, which have smaller, more frequent luminosity peaks than classical novae. These are thought to be caused by the release of gravitational potential energy when part of the accretion disc collapses onto the star, rather than by a release of energy due to fusion. In general, binary systems with a white dwarf accreting matter from a stellar companion are called cataclysmic variables. As well as novae and dwarf novae, several other classes of these variables are known, including polars and intermediate polars, both of which feature highly magnetic white dwarfs. Both fusion- and accretion-powered cataclysmic variables have been observed to be X-ray sources. Other multiple-star systems Other binaries include those that consist of a main sequence star (or giant) and a white dwarf. The binary Sirius AB is an example of this type of pair. White dwarfs can also exist as binaries or multiple star systems consisting only of white dwarfs. An example of a resolved triple white dwarf system is WD J1953−1019, discovered with Gaia DR2 data. One interesting field is the study of remnant planetary systems around white dwarfs. It is expected that planets orbiting several AU from a star will survive the star's post-main-sequence transformation into a white dwarf. Moreover, white dwarfs, being much smaller and correspondingly less luminous than their progenitors, are less likely to outshine any bodies in orbit around them. This makes white dwarfs advantageous targets for direct-imaging searches for exoplanets and brown dwarfs. The first brown dwarf to be detected by direct imaging was the companion to the white dwarf GD 165 A, discovered in 1988. More recently, the white dwarf WD 0806−661 was found to have a cold companion body of substellar mass, variously described as a brown dwarf or an exoplanet.
Physical sciences
Stellar astronomy
null
33516
https://en.wikipedia.org/wiki/Wave
Wave
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions, where the wave amplitude appears smaller or even zero. There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays. Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy, momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics, waves are studied as signals. On the other hand, some waves have envelopes which do not move at all, such as standing waves (which are fundamental to music) and hydraulic jumps. A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains. A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies.
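As a small illustration of that last point, here is a hedged sketch of ours, not from the article, of linear superposition: a more complicated periodic waveform assembled from sinusoidal components.

```python
import numpy as np

# The first few Fourier components of a square wave.  Each term is a
# sinusoid; in a linear medium their sum is itself a valid wave.
x = np.linspace(0.0, 2.0 * np.pi, 1000)
wave = sum(np.sin(n * x) / n for n in (1, 3, 5, 7, 9))

# Adding more odd harmonics makes the sum approach a square wave, showing
# how complicated waveforms decompose into sinusoidal components.
```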
A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute. Mathematical description Single waves A wave can be described just like a field, namely as a function F(x, t) where x is a position and t is a time. The value of x is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space R^3. However, in many cases one can ignore one dimension, and let x be a point of the Cartesian plane R^2. This is the case, for example, when studying vibrations of a drum skin. One may even restrict x to a point of the Cartesian line R – that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time t, on the other hand, is always assumed to be a scalar; that is, a real number. The value of F(x, t) can be any physical quantity of interest assigned to the point x that may vary with time. For example, if F represents the vibrations inside an elastic solid, the value of F(x, t) is usually a vector that gives the current displacement from x of the material particles that would be at the point x in the absence of vibration. For an electromagnetic wave, the value of F can be the electric field vector E, or the magnetic field vector H, or any related quantity, such as the Poynting vector. In fluid dynamics, the value of F(x, t) could be the velocity vector of the fluid at the point x, or any scalar property like pressure, temperature, or density. In a chemical reaction, F(x, t) could be the concentration of some substance in the neighborhood of point x of the reaction medium. For any dimension d (1, 2, or 3), the wave's domain is then a subset D of R^d, such that the function value F(x, t) is defined for any point x in D. For example, when describing the motion of a drum skin, one can consider D to be a disk (circle) on the plane R^2 with center at the origin (0, 0), and let F(x, t) be the vertical displacement of the skin at the point x of D and at time t. Superposition Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space. Wave spectrum Wave families Sometimes one is interested in a single specific wave. More often, however, one needs to understand a large set of possible waves; like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport. In some of those situations, one may describe such a family of waves by a function F(A, B, ...; x, t) that depends on certain parameters A, B, ..., besides x and t. Then one can obtain different waves – that is, different functions of x and t – by choosing different values for those parameters.
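A hedged sketch of this "wave as a function F(x, t)" view, with illustrative names and parameter values of our own choosing:

```python
import numpy as np

# A wave treated as a field: F(x, t) returns the disturbance at position x
# and time t.  A, v and k play the role of family parameters as described
# above; all values here are illustrative.
def F(x, t, A=1.0, v=2.0, k=3.0):
    return A * np.sin(k * (x - v * t))

x = np.linspace(0.0, 10.0, 501)               # a 1-D spatial domain
snapshot = F(x, t=0.0)                        # the whole field at one instant
history = F(0.0, np.linspace(0.0, 5.0, 501))  # one point followed through time
```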
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as F(A, L, n, c; x, t) = A cos(2πx(2n − 1)/(4L)) cos(2πct(2n − 1)/(4L)). The parameter A defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); c is the speed of sound; L is the length of the bore; and n is a positive integer (1, 2, 3, ...) that specifies the number of nodes in the standing wave. (The position x should be measured from the mouthpiece, and the time t from any moment at which the pressure at the mouthpiece is maximum. The quantity λ = 4L/(2n − 1) is the wavelength of the emitted note, and f = c/λ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters. As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance r from the center of the skin to the strike point, and on the strength s of the strike. Then the vibration for all possible strikes can be described by a function F(r, s; x, t). Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function h such that h(x) is the initial temperature at each point x of the bar. Then the temperatures at later times can be expressed by a function F that depends on the function h (that is, a functional operator), so that the temperature at a later time is F(h; x, t). Differential wave equations Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of F(x, t), only constrains how those values can change with time. Then the family of waves in question consists of all functions F that satisfy those constraints – that is, all solutions of the equation. This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if F(x, t) is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation ∂F/∂t(x, t) = α(∂²F/∂x1² + ∂²F/∂x2² + ∂²F/∂x3²) + βQ(x, t), where Q(x, t) is the heat that is being generated per unit of volume and time in the neighborhood of x at time t (for example, by chemical reactions happening there); x1, x2, x3 are the Cartesian coordinates of the point x; ∂F/∂t is the (first) derivative of F with respect to t; and ∂²F/∂xi² is the second derivative of F relative to xi. (The symbol "∂" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.) This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures. For another example, we can describe all possible sounds echoing within a container of gas by a function F(x, t) that gives the pressure at a point x and time t within that container. If the gas was initially at uniform temperature and composition, the evolution of F is constrained by the formula ∂²F/∂t²(x, t) = α(∂²F/∂x1² + ∂²F/∂x2² + ∂²F/∂x3²) + βP(x, t). Here P(x, t) is some extra compression force that is being applied to the gas near x by some external process, such as a loudspeaker or piston right next to x.
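A minimal sketch of our own, not from the article, of the heat equation as a constraint: an explicit finite-difference step evolves a 1-D temperature profile, with α set to 1 and no source term Q.

```python
import numpy as np

# Explicit finite differences for dF/dt = alpha * d2F/dx2 in one dimension.
alpha, dx, dt = 1.0, 0.1, 0.004        # dt < dx**2 / (2*alpha) for stability
x = np.arange(0.0, 10.0 + dx, dx)
F = np.exp(-(x - 5.0) ** 2)            # initial temperature profile h(x)

for _ in range(1000):
    # np.roll gives periodic boundaries, adequate for a demonstration.
    lap = (np.roll(F, -1) - 2.0 * F + np.roll(F, 1)) / dx**2
    F = F + dt * alpha * lap           # the profile spreads and flattens
```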
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is ∂²F/∂t², the second derivative of F with respect to time, rather than the first derivative ∂F/∂t. Yet this small change makes a huge difference on the set of solutions F. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves. Wave in elastic medium Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling in the x direction in space; for example, let the positive x direction be to the right, and the negative x direction be to the left. The wave travels with constant amplitude u, with constant velocity v, where v is independent of wavelength (no dispersion) and independent of amplitude (linear media, not nonlinear), and with constant waveform, or shape. This wave can then be described by the two-dimensional functions u(x, t) = F(x − vt) and u(x, t) = G(x + vt) or, more generally, by d'Alembert's formula: u(x, t) = F(x − vt) + G(x + vt), representing two component waveforms F and G traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation (1/v²)∂²u/∂t² = ∂²u/∂x². General solutions are based upon Duhamel's principle. Wave forms The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction). In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v. Amplitude and modulation The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form u(x, t) = A(x, t) sin(kx − ωt + φ), where A(x, t) is the amplitude envelope of the wave, k is the wavenumber and φ is the phase. If the group velocity v_g (see below) is wavelength-independent, this equation can be simplified as u(x, t) = A(x − v_g t) sin(kx − ωt + φ), showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation. Phase velocity and group velocity There are two velocities that are associated with waves, the phase velocity and the group velocity. Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity.
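A hedged sketch of d'Alembert's formula, with pulse shapes of our own choosing:

```python
import numpy as np

# u(x, t) = F(x - v*t) + G(x + v*t): two waveforms traveling rigidly in
# opposite directions at speed v.
v = 1.5

def F(s):                                # right-moving component waveform
    return np.exp(-s**2)

def G(s):                                # left-moving component waveform
    return 0.5 * np.exp(-(s - 3.0) ** 2)

x = np.linspace(-10.0, 10.0, 801)
for t in (0.0, 1.0, 2.0):
    u = F(x - v * t) + G(x + v * t)      # each snapshot shows shifted copies
```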
The phase velocity is given in terms of the wavelength λ (lambda) and period T as v_p = λ/T. Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes—the modulation or envelope of the wave. Special waves Sine waves Plane waves A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector n of unit length indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction (d = x · n) and time (t). Since the wave profile only depends on the position x in the combination d = x · n, any displacement in directions perpendicular to n cannot affect the value of the field. Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other. Standing waves A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions. The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time. Solitary waves A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. Physical properties Propagation Wave propagation is any of the ways in which waves travel. With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal waves and transverse waves. Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium. Reflection of plane waves in a half-space The propagation and reflection of plane waves, e.g. pressure waves (P-waves) or shear waves (SH- or SV-waves), are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated.
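The standing-wave description above can be checked numerically; a minimal sketch of ours, with arbitrary amplitude, wavenumber, and frequency:

```python
import numpy as np

# Sum of two equal counter-propagating sinusoids: a standing wave, since
# sin(k*x - w*t) + sin(k*x + w*t) = 2 sin(k*x) cos(w*t).
A, k, w = 1.0, 2.0, 3.0
x = np.linspace(0.0, 2.0 * np.pi, 500)

for t in np.linspace(0.0, 2.0 * np.pi / w, 5):
    standing = A * np.sin(k * x - w * t) + A * np.sin(k * x + w * t)
    # Nodes, where sin(k*x) = 0, stay at zero at every instant t.
```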
SV wave propagation The analytical solution for an SV-wave in a half-space indicates that, special cases aside, the plane SV wave reflects back into the domain as P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than that of the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength. P wave propagation Similar to the SV wave, the P incidence, in general, reflects as P and SV waves. There are some special cases where the regime is different. Wave velocity Wave velocity is a general concept covering the various kinds of wave velocity, including a wave's phase velocity and the speed at which energy (and information) propagates. The phase velocity is given as v_p = ω/k, where: v_p is the phase velocity (with SI unit m/s), ω is the angular frequency (with SI unit rad/s), k is the wavenumber (with SI unit rad/m). The phase speed gives the speed at which a point of constant phase of the wave will travel for a discrete frequency. The angular frequency ω cannot be chosen independently from the wavenumber k, but both are related through the dispersion relationship ω = ω(k). In the special case ω(k) = ck, with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance electromagnetic waves in vacuum are non-dispersive. In case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves). The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation: v_g = ∂ω/∂k. In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium. Waves exhibit common behaviors under a number of standard situations, for example: Transmission and media Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories: A bounded medium if it is finite in extent, otherwise an unbounded medium A linear medium if the amplitudes of different waves at any particular point in the medium can be added A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space An anisotropic medium if one or more of its physical properties differ in one or more directions An isotropic medium if its physical properties are the same in all directions Absorption Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored. Reflection When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and the line normal to the surface equals the angle made by the reflected wave and the same normal line.
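A sketch, using our own standard example rather than one from the article, of phase versus group velocity for a dispersive case: deep-water gravity waves with ω(k) = sqrt(g k).

```python
import numpy as np

# For omega(k) = sqrt(g*k): v_p = omega/k and v_g = d(omega)/dk = v_p / 2.
g = 9.81
k = np.linspace(0.1, 10.0, 200)
omega = np.sqrt(g * k)

v_phase = omega / k              # speed of a point of constant phase
v_group = np.gradient(omega, k)  # numerical d(omega)/dk; speed of the packet

# v_group / v_phase is ~0.5 at every k: individual crests outrun the packet.
```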
Refraction Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law. Diffraction A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave. Interference When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one were not present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern. Polarization The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter. Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel. Dispersion Dispersion is the frequency dependence of the refractive index, a consequence of the atomic nature of materials. A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton was the first to recognize that this meant that white light was a mixture of light of different colors. Doppler effect The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842. Mechanical waves A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. 
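As a numerical companion to the refraction paragraph above, a hedged sketch of Snell's law, n1 sin(θ1) = n2 sin(θ2), with illustrative refractive indices:

```python
import math

# Snell's law: returns the refraction angle in degrees, or None when the
# incidence angle exceeds the critical angle (total internal reflection).
def refraction_angle(theta1_deg, n1, n2):
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

print(refraction_angle(30.0, 1.0, 1.5))  # air to glass: about 19.5 degrees
print(refraction_angle(45.0, 1.5, 1.0))  # glass to air: total internal reflection
```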
Waves on strings The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies. The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ): v = √(T/μ), where the linear density μ is the mass per unit length of the string. Acoustic waves Acoustic or sound waves are compression waves which travel as body waves at the speed given by v = √(B/ρ0), the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound). Water waves Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths. Sound, a mechanical wave that propagates through gases, liquids, solids and plasmas. Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect. Ocean surface waves, which are perturbations that propagate through water. Body waves Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves. Seismic waves Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves—the primary (P waves) and secondary waves (S waves)—and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves. Shock waves A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium. Shear waves Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity. Other Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions. Electromagnetic waves An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation, both with speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. The unification of light and electromagnetic waves was experimentally confirmed by Hertz at the end of the 1880s.
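The two speed formulas above lend themselves to a quick numerical sketch; the values below are illustrative assumptions, not figures from the article.

```python
import math

def string_wave_speed(tension_n, mu_kg_per_m):
    return math.sqrt(tension_n / mu_kg_per_m)               # v = sqrt(T / mu)

def sound_speed(bulk_modulus_pa, density_kg_per_m3):
    return math.sqrt(bulk_modulus_pa / density_kg_per_m3)   # v = sqrt(B / rho)

print(string_wave_speed(60.0, 0.001))  # ~245 m/s for a light, taut string
print(sound_speed(1.42e5, 1.225))      # ~340 m/s for air at about 15 C
```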
Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye. Quantum mechanical waves Schrödinger equation The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle. Dirac equation The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles. de Broglie waves Louis de Broglie postulated that all particles with momentum have a wavelength λ = h/p, where h is the Planck constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10^−13 m. A wave representing such a particle traveling in the k-direction is expressed by the wave function ψ(r, t) ∝ e^(i(k·r − ωt)), where the wavelength is determined by the wave vector k as λ = 2π/k and the momentum by p = ħk. However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet, a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value. In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet. Gaussian wave packets also are used to analyze water waves. For example, a Gaussian wavefunction ψ might take the form ψ(x, 0) = A exp(−x²/(2σ²) + ik0x) at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π/k0. It is well known from the theory of Fourier analysis, or from the Heisenberg uncertainty principle (in the case of quantum mechanics), that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian. Given the Gaussian f(x) = e^(−x²/(2σ²)), the Fourier transform is f̃(k) = σe^(−σ²k²/2). The Gaussian in space therefore is made up of waves, f(x) = (1/√(2π)) ∫ f̃(k) e^(ikx) dk; that is, a number of waves of wavelengths λ such that kλ = 2π. The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
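A hedged numerical sketch of ours showing the Gaussian wave-packet trade-off just described: narrowing the packet in x widens its spectrum in k.

```python
import numpy as np

sigma, k0 = 0.5, 5.0
x = np.linspace(-20.0, 20.0, 4096)
psi = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * k0 * x)

# Discrete Fourier transform; fftfreq returns cycles per unit length, so
# multiply by 2*pi to get the angular wave vector k.
spectrum = np.abs(np.fft.fftshift(np.fft.fft(psi)))
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1] - x[0]))

# The spectrum is itself Gaussian, centered near k0, with width ~ 1/sigma:
# shrinking sigma localizes psi in x but spreads it in k.
```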
Gravity waves Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example. Gravitational waves Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016. Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
Physical sciences
Physics
null
33537
https://en.wikipedia.org/wiki/Wing
Wing
A wing is a type of fin that produces both lift and drag while moving through air. Wings are defined by two shape characteristics, an airfoil section and a planform. Wing efficiency is expressed as lift-to-drag ratio, which compares the benefit of lift with the air resistance of a given wing shape, as it flies. Aerodynamics is the study of wing performance in air. Equivalent foils that move through water are found on hydrofoil power vessels and foiling sailboats that lift out of the water at speed and on submarines that use diving planes to point the boat upwards or downwards, while running submerged. Hydrodynamics is the study of foil performance in water. Etymology and usage The word "wing", from the Old Norse vængr, for many centuries referred mainly to the foremost limbs of birds (in addition to the architectural aisle). But in recent centuries the word's meaning has extended to include lift-producing appendages of insects, bats, pterosaurs, boomerangs, some sail boats and aircraft, or the airfoil on a race car. Aerodynamics The design and analysis of the wings of aircraft is one of the principal applications of the science of aerodynamics, which is a branch of fluid mechanics. The properties of the airflow around any moving object can be found by solving the Navier-Stokes equations of fluid dynamics. Except for simple geometries, these equations are difficult to solve. Simpler explanations can be given. For a wing to produce "lift", it must be oriented at a suitable angle of attack relative to the flow of air past the wing. When this occurs, the wing deflects the airflow downwards, "turning" the air as it passes the wing. Since the wing exerts a force on the air to change its direction, the air must exert a force on the wing, equal in size but opposite in direction. This force arises from different air pressures that exist on the upper and lower surfaces of the wing. Lower-than-ambient air pressure is generated on the top surface of the wing, with a higher-than-ambient pressure on the bottom of the wing. (See: airfoil) These air pressure differences can either be measured using a pressure-measuring device or calculated from the airspeed using physical principles, including Bernoulli's principle, which relates changes in air speed to changes in air pressure. The lower air pressure on the top of the wing generates a smaller downward force on the top of the wing than the upward force generated by the higher air pressure on the bottom of the wing. This gives a net upward force on the wing. This force is called the lift generated by the wing. The different velocities of the air passing by the wing, the air pressure differences, the change in direction of the airflow, and the lift on the wing are different ways of describing how lift is produced, so it is possible to calculate lift from any one of the other three. For example, the lift can be calculated from the pressure differences, or from the different velocities of the air above and below the wing, or from the total momentum change of the deflected air. Fluid dynamics offers other approaches to solving these problems, all of which produce the same answer if correctly calculated. Given a particular wing and its velocity through the air, debates over which mathematical approach is the most convenient to use can be mistaken by those not familiar with the study of aerodynamics as differences of opinion about the basic principles of flight. Cross-sectional shape Wings with an asymmetrical cross-section are the norm in subsonic flight.
Wings with a symmetrical cross-section can also generate lift by using a positive angle of attack to deflect air downward. Symmetrical airfoils have higher stalling speeds than cambered airfoils of the same wing area, but are used in aerobatic aircraft as they provide the same flight characteristics whether the aircraft is upright or inverted. Another example comes from sailboats, where the sail is a thin sheet. For flight speeds near the speed of sound (transonic flight), specific asymmetrical airfoil sections are used to minimize the very pronounced increase in drag associated with airflow near the speed of sound. These airfoils, called supercritical airfoils, are flat on top and curved on the bottom. Design features Aircraft wings may feature some of the following: A rounded leading edge cross-section A sharp trailing edge cross-section Leading-edge devices such as slats, slots, or extensions Trailing-edge devices such as flaps or flaperons (combination of flaps and ailerons) Winglets to keep wingtip vortices from increasing drag and decreasing lift Dihedral, or a positive wing angle to the horizontal, increases spiral stability around the roll axis, whereas anhedral, or a negative wing angle to the horizontal, decreases spiral stability. Aircraft wings may have various devices, such as flaps or slats, that the pilot uses to modify the shape and surface area of the wing to change its operating characteristics in flight. Ailerons (usually near the wingtips) to roll the aircraft Spoilers on the upper surface to increase drag for descent and to reduce lift for more weight on wheels during braking Vortex generators to help prevent flow separation in transonic flow Wing fences to keep flow attached to the wing by stopping boundary layer separation from spreading in the roll direction Folding wings allow more aircraft storage in the confined space of the hangar deck of an aircraft carrier Variable-sweep wings or "swing wings" that allow outstretched wings during low-speed flight (e.g., take-off, landing and loitering) and swept back wings for high-speed flight (including supersonic flight), such as in the F-111 Aardvark, the F-14 Tomcat, the Panavia Tornado, the MiG-23, the MiG-27, the Tu-160 and the B-1B Lancer. Applications Besides fixed-wing aircraft, applications for wing shapes include: Hang gliders, which use wings ranging from fully flexible (paragliders, gliding parachutes), flexible (framed sail wings), to rigid Kites, which use a variety of lifting surfaces Flying model airplanes Helicopters, which use a rotating wing with a variable pitch angle to provide directional forces Propellers, whose blades generate lift for propulsion. The NASA Space Shuttle, which uses its wings only to glide during its descent to a runway. These types of aircraft are called spaceplanes. Some racing cars, especially Formula One cars, which use upside-down wings (or airfoils) to provide greater traction at high speeds Sailboats, which use sails as vertical wings with variable fullness and direction to move across water Flexible wings In 1948, Francis Rogallo invented the fully limp flexible wing. Domina Jalbert invented flexible un-sparred ram-air airfoiled thick wings. In nature Wings have evolved multiple times in history: in pterosaurs, insects, birds (see Bird wing), mammals (see Bats), fish, reptiles and plants. Wings of pterosaurs, birds, bats, and reptiles all evolved from existing limbs, whereas insect wings evolved as a completely separate structure.
Wings facilitated increased locomotion, dispersal, and diversification. Various species of penguins and other flighted or flightless water birds, such as auks, cormorants, guillemots, shearwaters, eider and scoter ducks, and diving petrels, are efficient underwater swimmers and use their wings to propel themselves through water.
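As a rough numerical companion to the lift discussion in the aerodynamics section above, here is a hedged sketch using the standard lift equation L = 0.5 ρ v² S C_L; the equation itself is textbook aerodynamics rather than something stated in this article, and all input values are illustrative.

```python
def lift_newtons(rho, v, wing_area, c_lift):
    """Standard lift equation: L = 0.5 * rho * v**2 * S * C_L."""
    return 0.5 * rho * v**2 * wing_area * c_lift

# Illustrative values: sea-level air density, ~56 m/s cruise speed, a
# 16.2 m^2 wing, and a modest cruise lift coefficient.
print(lift_newtons(rho=1.225, v=56.0, wing_area=16.2, c_lift=0.4))
# ~12.4 kN, roughly the weight of a laden light four-seat aircraft
```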
Technology
Aviation
null
33538
https://en.wikipedia.org/wiki/Week
Week
A week is a unit of time equal to seven days. It is the standard time period used for short cycles of days in most parts of the world. The days are often used to indicate common work days and rest days, as well as days of worship. Weeks are often mapped against yearly calendars. Ancient cultures had different "week" lengths, including a ten-day week in Egypt and an eight-day week for the Etruscans. The Etruscan week was adopted by the ancient Romans, but they later moved to a seven-day week, which had spread across Western Asia and the Eastern Mediterranean due to the influence of the Christian seven-day week, which is rooted in the Jewish seven-day week. In AD 321, Emperor Constantine the Great officially decreed a seven-day week in the Roman Empire, including making Sunday a public holiday. This later spread across Europe, then the rest of the world. In English, the names of the days of the week are Monday, Tuesday, Wednesday, Thursday, Friday, Saturday and Sunday. In many languages, including English, the days of the week are named after gods or classical planets. Saturday has kept its Roman name, while the other six days use Germanic equivalents. Such a week may be called a planetary week (i.e., a classical planetary week). Certain weeks within a year may be designated for a particular purpose, such as Golden Week in China and Japan, and National Family Week in Canada. More informally, certain groups may advocate awareness weeks, which are designed to draw attention to a certain subject or cause. The term "week" may also be used to refer to a sub-section of the week, such as the workweek and weekend. Cultures vary in which days of the week are designated the first and the last, though virtually all have Saturday, Sunday or Monday as the first day. The Geneva-based ISO standards organization uses Monday as the first day of the week in its ISO week date system through the international ISO 8601 standard. Most of Europe and China consider Monday the first day of the (work) week, while North America, Israel, South Asia, and many Catholic and Protestant countries, consider Sunday the first day of the week. Prior to 2000, Saturday was regarded as the first day of the week in much of the Middle East and North Africa due to the Islamic influence; however, this is no longer the case. Other regions are mixed, but typically observe either Sunday or Monday as the first day. The three Abrahamic religions observe different days of the week as their holy day. Jews observe their Sabbath (Shabbat) on Saturday, the seventh day, from sundown Friday to sundown Saturday, in honor of God's creation of the world in six days and his rest on the seventh. Most Christians observe Sunday (the Lord's Day), the first day of the week in traditional Christian calendars, in honor of the resurrection of Jesus. Muslims observe their "day of congregation", known as jumuʿah, on Friday because it was described as a sacred day of congregational worship in the Quran. Name The English word week comes from Old English, ultimately from a Common Germanic word derived from a root meaning "turn, move, change". The Germanic word probably had a wider meaning prior to the adoption of the Roman calendar, perhaps "succession series", as suggested by the Gothic word translating "order" in Luke 1:8. The seven-day week is named in many languages by a word derived from "seven". The archaism sennight ("seven-night") preserves the old Germanic practice of reckoning time by nights, as in the more common fortnight ("fourteen-night").
Hebdomad and hebdomadal week both derive from the Greek hebdomás ("a seven"). Septimana is cognate with the Romance terms derived from Latin septimana ("seven mornings"). Slavic has a formation *tъ(žь)dьnь, with reflexes in Serbian, Croatian, Ukrainian, Czech, and Polish, from *tъ "this" + *dьnь "day". Chinese has a term meaning, as it were, "planetary time unit"; an older Chinese form means "week, religious ceremony." Definition and duration A week is defined as an interval of exactly seven days, so that, except when passing through daylight saving time transitions or leap seconds, 1 week = 7 days = 168 hours = 10,080 minutes = 604,800 seconds. With respect to the Gregorian calendar: 1 Gregorian calendar year = 52 weeks + 1 day (2 days in a leap year), and 1 week ≈ 22.9984% of an average Gregorian month (7 days out of an average 30.436875). In a Gregorian mean year, there are 365.2425 days, and thus exactly 52 71/400 or 52.1775 weeks (unlike the Julian year of 365.25 days or ≈ 52.1786 weeks, which cannot be represented by a finite decimal expansion). There are exactly 20,871 weeks in 400 Gregorian years, so a given date falls on the same day of the week 400 years later. Relative to the path of the Moon, a week is 23.659% of an average lunation or 94.637% of an average quarter lunation. Historically, the system of dominical letters (letters A to G identifying the weekday of the first day of a given year) has been used to facilitate calculation of the day of week. The day of the week can be easily calculated given a date's Julian day number (JD, i.e. the integer value at noon UT): adding one to the remainder after dividing the Julian day number by seven (JD modulo 7 + 1) yields that date's ISO 8601 day of the week. In 1973, John Conway devised the Doomsday rule for mental calculation of the weekday of any date in any year. Days of the week The days of the week were named for the seven classical planets, which included the Sun and Moon. This naming system persisted alongside an "ecclesiastical" tradition of numbering the days in ecclesiastical Latin beginning with Dominica (the Lord's Day) as the first day. The Greco-Roman gods associated with the classical planets were rendered in their interpretatio germanica at some point during the late Roman Empire, yielding the Germanic tradition of names based on indigenous deities. The ordering of the weekday names is not the classical order of the planets (by distance in the planetary spheres model, which is Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn; nor, equivalently, by their apparent speed of movement in the night sky). Instead, the planetary hours system resulted in succeeding days being named for planets that are three places apart in their traditional listing. This characteristic was apparently discussed by Plutarch in a treatise written in c. 100 CE, which is reported to have addressed the question of why the days named after the planets are reckoned in a different order from the actual order (the text of Plutarch's treatise has been lost). Dio Cassius (early 3rd century) gives two explanations in a section of his Historia Romana after mentioning the Jewish practice of sanctifying the day called the day of Kronos (Saturday). An ecclesiastical, non-astrological, system of numbering the days of the week was adopted in Late Antiquity. This model also seems to have influenced (presumably via Gothic) the designation of Wednesday as "mid-week" in Old High German and Old Church Slavonic. Old Church Slavonic may have also modeled its name of Monday after the Latin.
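A minimal sketch of ours, not from the article, of the JD-modulo-7 weekday rule described above; the 1721425 offset between Python's proleptic Gregorian ordinal and the Julian day number is a standard constant, stated here as an assumption of this sketch.

```python
from datetime import date

# ISO 8601 day of week from a Julian day number: (JDN mod 7) + 1, 1 = Monday.
def iso_weekday_from_jdn(jdn: int) -> int:
    return jdn % 7 + 1

def jdn(d: date) -> int:
    # date.toordinal() counts days from 0001-01-01 = 1 in the proleptic
    # Gregorian calendar; adding 1721425 gives the Julian day number at noon.
    return d.toordinal() + 1721425

d = date(2000, 1, 1)                      # JDN 2451545, a Saturday
assert iso_weekday_from_jdn(jdn(d)) == 6  # ISO: Mon=1 ... Sat=6, Sun=7
assert d.isoweekday() == 6                # agrees with Python's built-in
```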
The ecclesiastical system became prevalent in Eastern Christianity, but in the Latin West it remains extant only in modern Icelandic, Galician, and Portuguese. History Ancient Near East The earliest evidence of an astrological significance of a seven-day period is a decree of King Sargon of Akkad around 2300 BCE. The Akkadians venerated the number seven, and the key celestial bodies visible to the naked eye numbered seven (the Sun, the Moon and the five closest planets). Gudea, the priest-king of Lagash in Sumer during the Gutian dynasty (about 2100 BCE), built a seven-room temple, which he dedicated with a seven-day festival. In the flood story of the Assyro-Babylonian Epic of Gilgamesh, the storm lasts for seven days, the dove is sent out after seven days (similarly to Genesis), and the Noah-like character of Utnapishtim leaves the ark seven days after it reaches the firm ground. Counting from the new moon, the Babylonians celebrated the 7th, 14th, 21st and 28th of the approximately 29- or 30-day lunar month as "holy days", also called "evil days" (meaning inauspicious for certain activities). On these days, officials were prohibited from various activities and common men were forbidden to "make a wish", and at least the 28th was known as a "rest day". On each of them, offerings were made to a different god and goddess. Though similar, the later practice of associating days of the week with deities or planets is not due to the Babylonians. Judaism A continuous seven-day cycle that runs throughout history without reference to the phases of the moon was first practiced in Judaism, dated to the 6th century BCE at the latest. There are several hypotheses concerning the origin of the biblical seven-day cycle. Friedrich Delitzsch and others suggested that the seven-day week, being approximately a quarter of a lunation, is the implicit astronomical origin of the seven-day week, and indeed the Babylonian calendar used intercalary days to synchronize the last week of a month with the new moon. According to this theory, the Jewish week was adopted from the Babylonians while removing the dependency on the moon. George Aaron Barton speculated that the seven-day creation account of Genesis is connected to the Babylonian creation epic, Enûma Eliš, which is recorded on seven tablets. In a frequently-quoted suggestion going back to the early 20th century, the Hebrew Sabbath is compared to the Sumerian sa-bat "mid-rest", a term for the full moon. The Sumerian term has been reconstructed as Sapattum or Sabattum in Babylonian, possibly present in the lost fifth tablet of the Enûma Eliš, tentatively reconstructed "[Sa]bbath shalt thou then encounter, mid[month]ly". However, Niels-Erik Andreasen, Jeffrey H. Tigay, and others claim that the Biblical Sabbath is mentioned as a day of rest in some of the earliest layers of the Pentateuch dated to the 9th century BCE at the latest, centuries before the Babylonian exile of Judah. They also find the resemblance between the Biblical Sabbath and the Babylonian system to be weak. Therefore, they suggest that the seven-day week may reflect an independent Israelite tradition. Tigay writes: It is clear that among neighboring nations that were in position to have an influence over Israel – and in fact which did influence it in various matters – there is no precise parallel to the Israelite Sabbatical week. This leads to the conclusion that the Sabbatical week, which is as unique to Israel as the Sabbath from which it flows, is an independent Israelite creation.
The seven-day week seems to have been adopted, at different stages, by the Persian Empire, in Hellenistic astrology, and (via Greek transmission) in Gupta India and Tang China. The Babylonian system was received by the Greeks in the 4th century BCE (notably via Eudoxus of Cnidus). Although some sources, such as the Encyclopædia Britannica, state that the Babylonians named the days of the week after the five planets, the sun, and the moon, many scholars disagree. Eviatar Zerubavel says, "the establishment of a seven-day week based on the regular observance of the Sabbath is a distinctively Jewish contribution to civilization. The choice of the number 7 as the basis for the Jewish week might have had an Assyrian or Babylonian origin, yet it is crucial to remember that the ancient dwellers of Mesopotamia themselves did not have a seven-day week." The astrological concept of planetary hours is an innovation of Hellenistic astrology, probably first conceived in the 2nd century BCE. The seven-day week was widely known throughout the Roman Empire by the 1st century CE, along with references to the Jewish Sabbath by Roman authors such as Seneca and Ovid. When the seven-day week came into use in Rome during the early imperial period, it did not immediately replace the older eight-day nundinal system. The nundinal system had probably fallen out of use by the time Emperor Constantine adopted the seven-day week for official use in CE 321, making the Day of the Sun () a legal holiday. Achaemenid period The Zoroastrian calendar follows the Babylonian in relating the 7th, 14th, 21st, and 28th of the 29- or 30-day lunar month to Ahura Mazda. The forerunner of all modern Zoroastrian calendars is the system used to determine dates in the Persian Empire, adopted from the Babylonian calendar by the 4th century BCE. Frank C. Senn in his book Christian Liturgy: Catholic and Evangelical points to data suggesting evidence of an early continuous use of a seven-day week, referring to the Jews during the Babylonian captivity in the 6th century BCE, after the destruction of the Temple of Solomon. The seven-day week in Judaism is tied to the Creation account in the Book of Genesis in the Hebrew Bible, where God creates the heavens and the earth in six days and rests on the seventh (Genesis 1:1-2:3); in the Book of Exodus, the fourth of the Ten Commandments is to rest on the seventh day, Shabbat, which can be seen as implying a socially instituted seven-day week. However, it is not clear whether the Genesis narrative predates the Babylonian captivity of the Jews in the 6th century BCE. At least since the Second Temple period under Persian rule, Judaism relied on the seven-day cycle of recurring Sabbaths. Tablets from the Achaemenid period indicate that the lunation of 29 or 30 days basically contained three seven-day weeks, and a final week of eight or nine days inclusive, breaking the continuous seven-day cycle. The Babylonians additionally celebrated the 19th as a special "evil day", the "day of anger", because it was roughly the 49th day of the (preceding) month, completing a "week of weeks", also with sacrifices and prohibitions. Difficulties with Friedrich Delitzsch's origin theory connecting Hebrew Shabbat with the Babylonian lunar cycle include reconciling the differences between an unbroken week and a lunar week, and explaining the absence of texts naming the lunar week as Shabbat in any language.
Hellenistic and Roman era In Jewish sources by the time of the Septuagint, the term "Sabbath" by synecdoche also came to refer to an entire seven-day week, the interval between two weekly Sabbaths. Jesus's parable of the Pharisee and the Publican (Luke 18:12) describes the Pharisee as fasting "twice in the week". In the account of the women finding the tomb empty, they are described as coming there "toward the one of the sabbaths"; translations substitute "week" for "sabbaths". The ancient Romans traditionally used the eight-day nundinum but, after the Julian calendar had come into effect in 45 BCE, the seven-day week came into increasing use. For a while, the week and the nundinal cycle coexisted, but by the time the week was officially adopted by Constantine in 321 CE, the nundinal cycle had fallen out of use. The association of the days of the week with the Sun, the Moon and the five planets visible to the naked eye dates to the Roman era (2nd century). The continuous seven-day cycle of the days of the week can be traced back to the reign of Augustus; the first identifiable date cited complete with day of the week is 6 February 60 CE, identified as a "Sunday" (as "eighth day before the ides of February, day of the Sun") in a Pompeiian graffito. According to the (contemporary) Julian calendar, 6 February 60 was, however, a Wednesday. This is explained by the existence of two conventions of naming days of the week based on the planetary hours system: 6 February was a "Sunday" based on the sunset naming convention, and a "Wednesday" based on the sunrise naming convention. Islamic concept According to Islamic beliefs, the seven-day week concept started with the creation of the universe by Allah. Abu Huraira reported that Muhammad said: Allah, the Exalted and Glorious, created the clay on Saturday and He created the mountains on Sunday and He created the trees on Monday and He created the things entailing labour on Tuesday and created light on Wednesday and He caused the animals to spread on Thursday and created Adam after 'Asr on Friday; the last creation at the last hour of the hours of Friday, i.e. between afternoon and night. Adoption in Asia China and Japan The earliest known reference in Chinese writings to a seven-day week is attributed to Fan Ning, who lived in the late 4th century in the Jin dynasty, while diffusion from the Manichaeans is documented in the writings of the Chinese Buddhist monk Yi Jing and the Ceylonese or Central Asian Buddhist monk Bu Kong of the 7th century (Tang dynasty). The Chinese variant of the planetary system was brought to Japan by the Japanese monk Kūkai (9th century). Surviving diaries of the Japanese statesman Fujiwara Michinaga show the seven-day system in use in Heian Period Japan as early as 1007. In Japan, the seven-day system was kept in use for astrological purposes until its promotion to a full-fledged Western-style calendrical basis during the Meiji Period (1868–1912). India The seven-day week was known in India by the 6th century, referenced in the Pañcasiddhāntikā. Shashi (2000) mentions the Garga Samhita, which he places in the 1st century BCE or CE, as a possible earlier reference to a seven-day week in India. He concludes that "the above references furnish a terminus ad quem (viz. 1st century). The terminus a quo cannot be stated with certainty".
Christian Europe The seven-day weekly cycle has remained unbroken in Christendom, and hence in Western history, for almost two millennia, despite changes to the Coptic, Julian, and Gregorian calendars, demonstrated by the date of Easter Sunday having been traced back through numerous computistic tables to an Ethiopic copy of an early Alexandrian table beginning with the Easter of 311 CE. A tradition of divinations arranged for the days of the week on which certain feast days occur developed in the Early Medieval period. There are many later variants of this, including German examples and the versions of Erra Pater published in 16th- to 17th-century England, mocked in Samuel Butler's Hudibras. South and East Slavic versions are known as koliadniki (from koliada, a loan of Latin calendae), with Bulgarian copies dating from the 13th century, and Serbian versions from the 14th century. Medieval Christian traditions associated with the lucky or unlucky nature of certain days of the week survived into the modern period. This concerns primarily Friday, associated with the crucifixion of Jesus. Sunday, sometimes personified as Saint Anastasia, was itself an object of worship in Russia, a practice denounced in a sermon extant in copies going back to the 14th century. Sunday, in the ecclesiastical numbering system, is counted as the first day of the week; yet, at the same time, it figures as the "eighth day", and has occasionally been so called in Christian liturgy. Justin Martyr wrote: "the first day after the Sabbath, remaining the first of all the days, is called, however, the eighth, according to the number of all the days of the cycle, and [yet] remains the first." A period of eight days, usually (but not always, mainly because of Christmas Day) starting and ending on a Sunday, is called an octave, particularly in Roman Catholic liturgy. In German, the phrase heute in acht Tagen (literally "today in eight days") can also mean one week from today (i.e. on the same weekday). The same is true of the Italian phrase oggi otto (literally "today eight") and of corresponding French and Spanish expressions. Numbering Weeks in a Gregorian calendar year can be numbered for each year. This style of numbering is often used in European and Asian countries; it is less common in the U.S. and elsewhere. The ISO week date system The system for numbering weeks is the ISO week date system, which is included in ISO 8601. This system dictates that each week begins on a Monday and is associated with the year that contains that week's Thursday. Determining Week 1 In practice, week 1 (W01 in ISO notation) of any year can be determined as follows: If 1 January falls on a Monday, Tuesday, Wednesday or Thursday, then the week of 1 January is Week 1. Except in the case of 1 January falling on a Monday, this Week 1 includes the last day(s) of the previous year. If 1 January falls on a Friday, Saturday, or Sunday, then 1 January is considered to be part of the last week of the previous year, and Week 1 begins on the first Monday after 1 January. Examples: Week 1 of 2015 (2015W01 in ISO notation) started on Monday, 29 December 2014 and ended on Sunday, 4 January 2015, because 1 January 2015 fell on a Thursday. Week 1 of 2021 (2021W01 in ISO notation) started on Monday, 4 January 2021 and ended on Sunday, 10 January 2021, because 1 January 2021 fell on a Friday. Week 52 and 53 It is also possible to determine whether the last week of the previous year was Week 52 or Week 53, as follows: If 1 January falls on a Friday, then it is part of Week 53 of the previous year (W53-5).
If 1 January falls on a Saturday, then it is part of Week 53 of the previous year if that is a leap year (W53-6), and part of Week 52 otherwise (W52-6), i.e. if the previous year is a common year. If 1 January falls on a Sunday, then it is part of Week 52 of the previous year (W52-7). Schematic representation of ISO week date
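These rules are implemented by the standard libraries of many programming languages. As a minimal illustration (a sketch in Python, whose built-in datetime module follows the ISO 8601 week rules described above), the two examples can be checked directly:

from datetime import date

# isocalendar() returns (ISO year, ISO week number, ISO weekday),
# where Monday = 1 and Sunday = 7.

# 1 January 2015 fell on a Thursday, so it belongs to Week 1 of 2015 ...
print(date(2015, 1, 1).isocalendar())    # ISO year 2015, week 1, weekday 4
# ... and that week began on Monday, 29 December 2014.
print(date(2014, 12, 29).isocalendar())  # ISO year 2015, week 1, weekday 1

# 1 January 2021 fell on a Friday, so it belongs to Week 53 of 2020,
# and Week 1 of 2021 began on Monday, 4 January 2021.
print(date(2021, 1, 1).isocalendar())    # ISO year 2020, week 53, weekday 5
print(date(2021, 1, 4).isocalendar())    # ISO year 2021, week 1, weekday 1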
Physical sciences
Time
null
33550
https://en.wikipedia.org/wiki/Wood
Wood
Wood is a structural tissue/material found as xylem in the stems and roots of trees and other woody plants. It is an organic material: a natural composite of cellulosic fibers that are strong in tension, embedded in a matrix of lignin that resists compression. Wood is sometimes defined as only the secondary xylem in the stems of trees, or more broadly to include the same type of tissue elsewhere, such as in the roots of trees or shrubs. In a living tree, it performs a mechanical-support function, enabling woody plants to grow large or to stand up by themselves. It also conveys water and nutrients among the leaves, other growing tissues, and the roots. Wood may also refer to other plant materials with comparable properties, and to material engineered from wood, woodchips, or fibers. Wood has been used for thousands of years for fuel, as a construction material, for making tools and weapons, furniture and paper. More recently it emerged as a feedstock for the production of purified cellulose and its derivatives, such as cellophane and cellulose acetate. As of 2020, the growing stock of forests worldwide was about 557 billion cubic meters. As an abundant, carbon-neutral renewable resource, woody materials have been of intense interest as a source of renewable energy. In 2008, approximately 3.97 billion cubic meters of wood were harvested. Dominant uses were for furniture and building construction. Wood is scientifically studied and researched through the discipline of wood science, which originated at the beginning of the 20th century. History A 2011 discovery in the Canadian province of New Brunswick yielded the earliest known plants to have grown wood, approximately 395 to 400 million years ago. Wood can be dated by carbon dating and in some species by dendrochronology to determine when a wooden object was created. People have used wood for thousands of years for many purposes, including as a fuel or as a construction material for making houses, tools, weapons, furniture, packaging, artworks, and paper. Known constructions using wood date back ten thousand years. Buildings like the longhouses in Neolithic Europe were made primarily of wood. Recent use of wood has been enhanced by the addition of steel and bronze into construction. The year-to-year variation in tree-ring widths and isotopic abundances gives clues to the prevailing climate at the time a tree was cut. Physical properties Growth rings Wood, in the strict sense, is yielded by trees, which increase in diameter by the formation, between the existing wood and the inner bark, of new woody layers which envelop the entire stem, living branches, and roots. This process is known as secondary growth; it is the result of cell division in the vascular cambium, a lateral meristem, and subsequent expansion of the new cells. These cells then go on to form thickened secondary cell walls, composed mainly of cellulose, hemicellulose and lignin. Where the differences between the seasons are distinct, e.g. in New Zealand, growth can occur in a discrete annual or seasonal pattern, leading to growth rings; these can usually be most clearly seen on the end of a log, but are also visible on the other surfaces. If the alternation of seasons is annual, these growth rings are referred to as annual rings. Where there is little seasonal difference, as in equatorial regions such as Singapore, growth rings are likely to be indistinct or absent.
If the bark of the tree has been removed in a particular area, the rings will likely be deformed as the plant overgrows the scar. If there are differences within a growth ring, then the part of a growth ring nearest the center of the tree, and formed early in the growing season when growth is rapid, is usually composed of wider elements. It is usually lighter in color than that near the outer portion of the ring, and is known as earlywood or springwood. The outer portion formed later in the season is then known as the latewood or summerwood. There are major differences, depending on the kind of wood. If a tree grows all its life in the open and the conditions of soil and site remain unchanged, it will make its most rapid growth in youth, and gradually decline. The annual rings of growth are for many years quite wide, but later they become narrower and narrower. Since each succeeding ring is laid down on the outside of the wood previously formed, it follows that unless a tree materially increases its production of wood from year to year, the rings must necessarily become thinner as the trunk gets wider. As a tree reaches maturity its crown becomes more open and the annual wood production is lessened, thereby reducing still more the width of the growth rings. In the case of forest-grown trees so much depends upon the competition of the trees in their struggle for light and nourishment that periods of rapid and slow growth may alternate. Some trees, such as southern oaks, maintain the same width of ring for hundreds of years. On the whole, as a tree gets larger in diameter the width of the growth rings decreases. Knots As a tree grows, lower branches often die, and their bases may become overgrown and enclosed by subsequent layers of trunk wood, forming a type of imperfection known as a knot. The dead branch may not be attached to the trunk wood except at its base and can drop out after the tree has been sawn into boards. Knots affect the technical properties of the wood, usually reducing tension strength, but may be exploited for visual effect. In a longitudinally sawn plank, a knot will appear as a roughly circular "solid" (usually darker) piece of wood around which the grain of the rest of the wood "flows" (parts and rejoins). Within a knot, the direction of the wood (grain direction) is up to 90 degrees different from the grain direction of the regular wood. In the tree a knot is either the base of a side branch or a dormant bud. A knot (when the base of a side branch) is conical in shape (hence the roughly circular cross-section) with the inner tip at the point in stem diameter at which the plant's vascular cambium was located when the branch formed as a bud. In grading lumber and structural timber, knots are classified according to their form, size, soundness, and the firmness with which they are held in place. This firmness is affected by, among other factors, the length of time for which the branch was dead while the attaching stem continued to grow. Knots do not necessarily influence the stiffness of structural timber; this will depend on the size and location. Stiffness and elastic strength are more dependent upon the sound wood than upon localized defects. The breaking strength is very susceptible to defects. Sound knots do not weaken wood when subject to compression parallel to the grain. In some decorative applications, wood with knots may be desirable to add visual interest. 
In applications where wood is painted, such as skirting boards, fascia boards, door frames and furniture, resins present in the timber may continue to 'bleed' through to the surface of a knot for months or even years after manufacture and show as a yellow or brownish stain. A knot primer paint or solution (knotting), correctly applied during preparation, may do much to reduce this problem but it is difficult to control completely, especially when using mass-produced kiln-dried timber stocks. Heartwood and sapwood Heartwood (or duramen) is wood that as a result of a naturally occurring chemical transformation has become more resistant to decay. Heartwood formation is a genetically programmed process that occurs spontaneously. Some uncertainty exists as to whether the wood dies during heartwood formation, as it can still chemically react to decay organisms, but only once. The term heartwood derives solely from its position and not from any vital importance to the tree. This is evidenced by the fact that a tree can thrive with its heart completely decayed. Some species begin to form heartwood very early in life, thus having only a thin layer of live sapwood, while in others the change comes slowly. Thin sapwood is characteristic of such species as chestnut, black locust, mulberry, osage-orange, and sassafras, while in maple, ash, hickory, hackberry, beech, and pine, thick sapwood is the rule. Some others never form heartwood. Heartwood is often visually distinct from the living sapwood and can be distinguished in a cross-section where the boundary will tend to follow the growth rings. For example, it is sometimes much darker. Other processes such as decay or insect invasion can also discolor wood, even in woody plants that do not form heartwood, which may lead to confusion. Sapwood (or alburnum) is the younger, outermost wood; in the growing tree it is living wood, and its principal functions are to conduct water from the roots to the leaves and to store up and give back according to the season the reserves prepared in the leaves. By the time they become competent to conduct water, all xylem tracheids and vessels have lost their cytoplasm and the cells are therefore functionally dead. All wood in a tree is first formed as sapwood. The more leaves a tree bears and the more vigorous its growth, the larger the volume of sapwood required. Hence trees making rapid growth in the open have thicker sapwood for their size than trees of the same species growing in dense forests. Sometimes trees (of species that do form heartwood) grown in the open may reach a considerable diameter before any heartwood begins to form, for example, in second-growth hickory or open-grown pines. No definite relation exists between the annual rings of growth and the amount of sapwood. Within the same species the cross-sectional area of the sapwood is very roughly proportional to the size of the crown of the tree. If the rings are narrow, more of them are required than where they are wide. As the tree gets larger, the sapwood must necessarily become thinner or increase materially in volume. Sapwood is relatively thicker in the upper portion of the trunk of a tree than near the base, because the age and the diameter of the upper sections are less. When a tree is very young it is covered with limbs almost, if not entirely, to the ground, but as it grows older some or all of them will eventually die and are either broken off or fall off.
Subsequent growth of wood may completely conceal the stubs, which will remain as knots. No matter how smooth and clear a log is on the outside, it is more or less knotty near the middle. Consequently, the sapwood of an old tree, and particularly of a forest-grown tree, will be freer from knots than the inner heartwood. Since in most uses of wood, knots are defects that weaken the timber and interfere with its ease of working and other properties, it follows that a given piece of sapwood, because of its position in the tree, may well be stronger than a piece of heartwood from the same tree. Different pieces of wood cut from a large tree may differ decidedly, particularly if the tree is big and mature. In some trees, the wood laid on late in the life of a tree is softer, lighter, weaker, and more even textured than that produced earlier, but in other trees, the reverse applies. This may or may not correspond to heartwood and sapwood. In a large log the sapwood, because of the time in the life of the tree when it was grown, may be inferior in hardness, strength, and toughness to equally sound heartwood from the same log. In a smaller tree, the reverse may be true. Color In species which show a distinct difference between heartwood and sapwood the natural color of heartwood is usually darker than that of the sapwood, and very frequently the contrast is conspicuous (see section of yew log above). This is produced by deposits in the heartwood of chemical substances, so that a dramatic color variation does not imply a significant difference in the mechanical properties of heartwood and sapwood, although there may be a marked biochemical difference between the two. Some experiments on very resinous longleaf pine specimens indicate that the resin increases strength when the wood is dry. Such resin-saturated heartwood is called "fat lighter". Structures built of fat lighter are almost impervious to rot and termites, and very flammable. Tree stumps of old longleaf pines are often dug, split into small pieces and sold as kindling for fires. Stumps dug in this way may have remained in the ground for a century or more since the tree was cut. Spruce impregnated with crude resin and dried is also greatly increased in strength thereby. Since the latewood of a growth ring is usually darker in color than the earlywood, this fact may be used in visually judging the density, and therefore the hardness and strength of the material. This is particularly the case with coniferous woods. In ring-porous woods the vessels of the early wood often appear on a finished surface as darker than the denser latewood, though on cross sections of heartwood the reverse is commonly true. Otherwise the color of wood is no indication of strength. Abnormal discoloration of wood often denotes a diseased condition, indicating unsoundness. The black check in western hemlock is the result of insect attacks. The reddish-brown streaks so common in hickory and certain other woods are mostly the result of injury by birds. The discoloration is merely an indication of an injury, and in all probability does not of itself affect the properties of the wood. Certain rot-producing fungi impart to wood characteristic colors which thus become symptomatic of weakness. Ordinary sap-staining is due to fungal growth, but does not necessarily produce a weakening effect.
Water content Water occurs in living wood in three locations: in the cell walls, in the protoplasmic contents of the cells, and as free water in the cell cavities and spaces, especially of the xylem. In heartwood it occurs only in the first and last forms. Wood that is thoroughly air-dried (in equilibrium with the moisture content of the air) retains 8–16% of the water in the cell walls, and none, or practically none, in the other forms. Even oven-dried wood retains a small percentage of moisture, but for all except chemical purposes, may be considered absolutely dry. The general effect of the water content upon the wood substance is to render it softer and more pliable. A similar effect occurs in the softening action of water on rawhide, paper, or cloth. Within certain limits, the greater the water content, the greater its softening effect. The moisture in wood can be measured by several different moisture meters. Drying produces a decided increase in the strength of wood, particularly in small specimens. An extreme example is the case of a completely dry spruce block 5 cm in section, which will sustain a permanent load four times as great as a green (undried) block of the same size. The greatest strength increase due to drying is in the ultimate crushing strength, and strength at elastic limit in endwise compression; these are followed by the modulus of rupture, and stress at elastic limit in cross-bending, while the modulus of elasticity is least affected. Structure Wood is a heterogeneous, hygroscopic, cellular and anisotropic (or more specifically, orthotropic) material. It consists of cells, and the cell walls are composed of micro-fibrils of cellulose (40–50%) and hemicellulose (15–25%) impregnated with lignin (15–30%). In coniferous or softwood species the wood cells are mostly of one kind, tracheids, and as a result the material is much more uniform in structure than that of most hardwoods. There are no vessels ("pores") in coniferous wood such as one sees so prominently in oak and ash, for example. The structure of hardwoods is more complex. The water conducting capability is mostly taken care of by vessels: in some cases (oak, chestnut, ash) these are quite large and distinct, in others (buckeye, poplar, willow) too small to be seen without a hand lens. In discussing such woods it is customary to divide them into two large classes, ring-porous and diffuse-porous. In ring-porous species, such as ash, black locust, catalpa, chestnut, elm, hickory, mulberry, and oak, the larger vessels or pores (as cross sections of vessels are called) are localized in the part of the growth ring formed in spring, thus forming a region of more or less open and porous tissue. The rest of the ring, produced in summer, is made up of smaller vessels and a much greater proportion of wood fibers. These fibers are the elements which give strength and toughness to wood, while the vessels are a source of weakness. In diffuse-porous woods the pores are evenly sized so that the water conducting capability is scattered throughout the growth ring instead of being collected in a band or row. Examples of this kind of wood are alder, basswood, birch, buckeye, maple, willow, and the Populus species such as aspen, cottonwood and poplar. Some species, such as walnut and cherry, are on the border between the two classes, forming an intermediate group. Earlywood and latewood In softwood In temperate softwoods, there often is a marked difference between latewood and earlywood.
The latewood will be denser than that formed early in the season. When examined under a microscope, the cells of dense latewood are seen to be very thick-walled and with very small cell cavities, while those formed first in the season have thin walls and large cell cavities. The strength is in the walls, not the cavities. Hence the greater the proportion of latewood, the greater the density and strength. In choosing a piece of pine where strength or stiffness is the important consideration, the principal thing to observe is the comparative amounts of earlywood and latewood. The width of ring is not nearly so important as the proportion and nature of the latewood in the ring. If a heavy piece of pine is compared with a lightweight piece it will be seen at once that the heavier one contains a larger proportion of latewood than the other, and is therefore showing more clearly demarcated growth rings. In white pines there is not much contrast between the different parts of the ring, and as a result the wood is very uniform in texture and is easy to work. In hard pines, on the other hand, the latewood is very dense and is deep-colored, presenting a very decided contrast to the soft, straw-colored earlywood. It is not only the proportion of latewood, but also its quality, that counts. In specimens that show a very large proportion of latewood it may be noticeably more porous and weigh considerably less than the latewood in pieces that contain less latewood. One can judge comparative density, and therefore to some extent strength, by visual inspection. No satisfactory explanation can as yet be given for the exact mechanisms determining the formation of earlywood and latewood. Several factors may be involved. In conifers, at least, rate of growth alone does not determine the proportion of the two portions of the ring, for in some cases the wood of slow growth is very hard and heavy, while in others the opposite is true. The quality of the site where the tree grows undoubtedly affects the character of the wood formed, though it is not possible to formulate a rule governing it. In general, where strength or ease of working is essential, woods of moderate to slow growth should be chosen. In ring-porous woods In ring-porous woods, each season's growth is always well defined, because the large pores formed early in the season abut on the denser tissue of the year before. In the case of the ring-porous hardwoods, there seems to exist a pretty definite relation between the rate of growth of timber and its properties. This may be briefly summed up in the general statement that the more rapid the growth or the wider the rings of growth, the heavier, harder, stronger, and stiffer the wood. This, it must be remembered, applies only to ring-porous woods such as oak, ash, hickory, and others of the same group, and is, of course, subject to some exceptions and limitations. In ring-porous woods of good growth, it is usually the latewood in which the thick-walled, strength-giving fibers are most abundant. As the breadth of ring diminishes, this latewood is reduced so that very slow growth produces comparatively light, porous wood composed of thin-walled vessels and wood parenchyma. In good oak, these large vessels of the earlywood occupy from six to ten percent of the volume of the log, while in inferior material they may make up 25% or more. The latewood of good oak is dark colored and firm, and consists mostly of thick-walled fibers which form one-half or more of the wood. 
In inferior oak, this latewood is much reduced both in quantity and quality. Such variation is very largely the result of rate of growth. Wide-ringed wood is often called "second-growth", because the growth of the young timber in open stands after the old trees have been removed is more rapid than in trees in a closed forest, and in the manufacture of articles where strength is an important consideration such "second-growth" hardwood material is preferred. This is particularly the case in the choice of hickory for handles and spokes. Here not only strength, but toughness and resilience are important. The results of a series of tests on hickory by the U.S. Forest Service show that: "The work or shock-resisting ability is greatest in wide-ringed wood that has from 5 to 14 rings per inch (rings 1.8-5 mm thick), is fairly constant from 14 to 38 rings per inch (rings 0.7–1.8 mm thick), and decreases rapidly from 38 to 47 rings per inch (rings 0.5–0.7 mm thick). The strength at maximum load is not so great with the most rapid-growing wood; it is maximum with from 14 to 20 rings per inch (rings 1.3–1.8 mm thick), and again becomes less as the wood becomes more closely ringed. The natural deduction is that wood of first-class mechanical value shows from 5 to 20 rings per inch (rings 1.3–5 mm thick) and that slower growth yields poorer stock. Thus the inspector or buyer of hickory should discriminate against timber that has more than 20 rings per inch (rings less than 1.3 mm thick). Exceptions exist, however, in the case of normal growth upon dry situations, in which the slow-growing material may be strong and tough." The effect of rate of growth on the qualities of chestnut wood is summarized by the same authority as follows: "When the rings are wide, the transition from spring wood to summer wood is gradual, while in the narrow rings the spring wood passes into summer wood abruptly. The width of the spring wood changes but little with the width of the annual ring, so that the narrowing or broadening of the annual ring is always at the expense of the summer wood. The narrow vessels of the summer wood make it richer in wood substance than the spring wood composed of wide vessels. Therefore, rapid-growing specimens with wide rings have more wood substance than slow-growing trees with narrow rings. Since the more the wood substance the greater the weight, and the greater the weight the stronger the wood, chestnuts with wide rings must have stronger wood than chestnuts with narrow rings. This agrees with the accepted view that sprouts (which always have wide rings) yield better and stronger wood than seedling chestnuts, which grow more slowly in diameter." In diffuse-porous woods In the diffuse-porous woods, the demarcation between rings is not always so clear and in some cases is almost (if not entirely) invisible to the unaided eye. Conversely, when there is a clear demarcation there may not be a noticeable difference in structure within the growth ring. In diffuse-porous woods, as has been stated, the vessels or pores are even-sized, so that the water conducting capability is scattered throughout the ring instead of collected in the earlywood. The effect of rate of growth is, therefore, not the same as in the ring-porous woods, approaching more nearly the conditions in the conifers. In general, it may be stated that such woods of medium growth afford stronger material than when very rapidly or very slowly grown. In many uses of wood, total strength is not the main consideration. 
If ease of working is prized, wood should be chosen with regard to its uniformity of texture and straightness of grain, which will in most cases occur when there is little contrast between the latewood of one season's growth and the earlywood of the next. Monocots Structural material that resembles ordinary, "dicot" or conifer timber in its gross handling characteristics is produced by a number of monocot plants, and these also are colloquially called wood. Of these, bamboo, botanically a member of the grass family, has considerable economic importance, larger culms being widely used as a building and construction material and in the manufacture of engineered flooring, panels and veneer. Another major plant group that produces material that often is called wood are the palms. Of much less importance are plants such as Pandanus, Dracaena and Cordyline. With all this material, the structure and composition of the processed raw material is quite different from ordinary wood. Specific gravity The single most revealing property of wood as an indicator of wood quality is specific gravity (Timell 1986), as both pulp yield and lumber strength are determined by it. Specific gravity is the ratio of the mass of a substance to the mass of an equal volume of water; density is the ratio of the mass of a quantity of a substance to its volume, and is expressed in mass per unit volume, e.g., grams per cubic centimeter (g/cm3) or grams per milliliter (g/ml). The terms are essentially equivalent as long as the metric system is used. Upon drying, wood shrinks and its density increases. Minimum values are associated with green (water-saturated) wood and are referred to as basic specific gravity (Timell 1986). The U.S. Forest Products Laboratory lists a variety of ways to define specific gravity (G) and density (ρ) for wood: The FPL has adopted Gb and G12 for specific gravity, in accordance with the ASTM D2555 standard. These are scientifically useful, but don't represent any condition that could physically occur. The FPL Wood Handbook also provides formulas for approximately converting any of these measurements to any other. Density Wood density is determined by multiple growth and physiological factors compounded into "one fairly easily measured wood characteristic" (Elliott 1970). Age, diameter, height, radial (trunk) growth, geographical location, site and growing conditions, silvicultural treatment, and seed source all to some degree influence wood density. Variation is to be expected. Within an individual tree, the variation in wood density is often as great as or even greater than that between different trees (Timell 1986). Variation of specific gravity within the bole of a tree can occur in either the horizontal or vertical direction. Because the specific gravity as defined above uses an unrealistic condition, woodworkers tend to use the "average dried weight", which is a density based on mass at 12% moisture content and volume at the same (ρ12). This condition occurs when the wood is at equilibrium moisture content with air at about 65% relative humidity and a temperature of 30 °C (86 °F). This density is expressed in units of kg/m3 or lbs/ft3; a short numeric sketch of these definitions is given after the table note below. Tables The following tables list the mechanical properties of wood and lumber plant species, including bamboo.
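As a minimal numeric sketch of the two definitions above (in Python; the sample measurements are invented for illustration and are not taken from any table):

WATER_DENSITY = 1000.0  # kg/m3, the reference for specific gravity

# Hypothetical measurements for a single wood sample
oven_dry_mass = 0.42  # kg
green_volume = 0.001  # m3, the volume of the water-saturated sample

# Basic specific gravity: oven-dry mass over green volume,
# relative to the density of water (dimensionless)
basic_sg = oven_dry_mass / (green_volume * WATER_DENSITY)
print(f"basic specific gravity: {basic_sg:.2f}")  # 0.42

# "Average dried weight" as used by woodworkers: mass and volume
# are both taken at 12% moisture content
mass_at_12 = 0.47       # kg, hypothetical
volume_at_12 = 0.00093  # m3, hypothetical (the sample shrinks on drying)
density_12 = mass_at_12 / volume_at_12
print(f"density at 12% MC: {density_12:.0f} kg/m3")  # about 505 kg/m3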
Technology
Materials
null
33555
https://en.wikipedia.org/wiki/Wheel
Wheel
A wheel is a rotating component (typically circular in shape) that is intended to turn on an axle bearing. The wheel is one of the key components of the wheel and axle, which is one of the six simple machines. Wheels, in conjunction with axles, allow heavy objects to be moved easily, facilitating movement or transportation while supporting a load, or performing labor in machines. Wheels are also used for other purposes, such as a ship's wheel, steering wheel, potter's wheel, and flywheel. Common examples can be found in transport applications. A wheel reduces friction by facilitating motion by rolling, together with the use of axles. In order for wheels to rotate, a moment needs to be applied to the wheel about its axis, either by way of gravity or by the application of another external force or torque. Terminology The English word wheel comes from the Old English word hwēol, from Proto-Germanic *hwehwlą, from Proto-Indo-European *kʷekʷlóm, an extended form of the root *kʷel- ("to turn, revolve"). Cognates within Indo-European include Icelandic hjól, Greek kýklos, and Sanskrit chakra, the last two both meaning "circle" or "wheel". History Mesopotamian civilization was long credited with the invention of the wheel, mainly by older sources. Research in the 2000s suggests that the wheel was invented independently in Eastern Europe, with the oldest potter's wheel and oldest wheels for vehicles found in the Cucuteni–Trypillia culture, which predate Mesopotamian finds by several hundred years. In 2024, a team of researchers reported what may be the earliest evidence of wheel-like stones, 12,000 years old, in what is now Israel. The invention of the solid wooden disk wheel falls into the late Neolithic, and may be seen in conjunction with other technological advances that gave rise to the early Bronze Age. This implies the passage of several wheel-less millennia after the invention of agriculture and of pottery, during the Aceramic Neolithic. 4500–3300 BCE (Copper Age): invention of the potter's wheel; earliest solid wooden wheels (disks with a hole for the axle); earliest wheeled vehicles 3300–2200 BCE (Early Bronze Age) 2200–1550 BCE (Middle Bronze Age): invention of the spoked wheel and the chariot; domestication of the horse The Halaf culture of 6500–5100 BCE is sometimes credited with the earliest depiction of a wheeled vehicle, but there is no evidence of Halafians using either wheeled vehicles or pottery wheels. Potter's wheels are thought to have been used in the 4th millennium BCE in the Middle East. The oldest surviving example of a potter's wheel was thought to be one found in Ur (modern day Iraq) dating to approximately 3100 BCE. However, a potter's wheel found in western Ukraine, of the Cucuteni–Trypillia culture, dates to the middle of the 5th millennium BCE, which pre-dates the earliest use of the potter's wheel in Mesopotamia. Wheels of uncertain date have been found in the Indus Valley civilization of the late 4th millennium BCE, covering areas of present-day India and Pakistan. The oldest indirect evidence of wheeled movement was found in the form of miniature clay wheels north of the Black Sea before 4000 BCE. From the middle of the 4th millennium BCE onward, the evidence is condensed throughout Europe in the form of toy cars, depictions, or ruts, with the oldest find in Northern Germany dating back to around 3400 BCE. In Mesopotamia, depictions of wheeled wagons found on clay tablet pictographs at the Eanna district of Uruk, in the Sumerian civilization, are dated to c. 3500–3350 BCE.
In the second half of the 4th millennium BCE, evidence of wheeled vehicles appeared near-simultaneously in the Northern (Maykop culture) and South Caucasus and Eastern Europe (Cucuteni–Trypillian culture). Depictions of a wheeled vehicle appeared between 3631 and 3380 BCE in the Bronocice clay pot excavated in a Funnelbeaker culture settlement in southern Poland. In nearby Olszanica, a 2.2 m wide door was constructed for wagon entry; this barn was 40 m long with three doors, dated to 5000 BCE, and belonged to the neolithic Linear Pottery culture. Surviving evidence of a wheel-axle combination, from Stare Gmajne near Ljubljana in Slovenia (Ljubljana Marshes Wooden Wheel), is dated within two standard deviations to 3340–3030 BCE, the axle to 3360–3045 BCE. Two types of early Neolithic European wheel and axle are known: a circumalpine type of wagon construction (the wheel and axle rotate together, as in the Ljubljana Marshes Wheel), and that of the Baden culture in Hungary (the axle does not rotate). They both are dated to c. 3200–3000 BCE. Some historians believe that there was a diffusion of the wheeled vehicle from the Near East to Europe around the mid-4th millennium BCE. Early wheels were simple wooden disks with a hole for the axle. Some of the earliest wheels were made from horizontal slices of tree trunks. Because of the uneven structure of wood, a wheel made from a horizontal slice of a tree trunk will tend to be inferior to one made from rounded pieces of longitudinal boards. The spoked wheel was invented more recently and allowed the construction of lighter and swifter vehicles. The earliest known examples of wooden spoked wheels are in the context of the Sintashta culture, dating to c. 2000 BCE (Krivoye Lake). Soon after this, horse cultures of the Caucasus region used horse-drawn spoked-wheel war chariots for the greater part of three centuries. They moved deep into the Greek peninsula where they joined with the existing Mediterranean peoples to give rise, eventually, to classical Greece after the breaking of Minoan dominance and consolidations led by pre-classical Sparta and Athens. Celtic chariots introduced an iron rim around the wheel in the 1st millennium BCE. In China, wheel tracks dating to around 2200 BCE have been found at Pingliangtai, a site of the Longshan Culture. Similar tracks were also found at Yanshi, a city of the Erlitou culture, dating to around 1700 BCE. The earliest evidence of spoked wheels in China comes from Qinghai, in the form of two wheel hubs from a site dated between 2000 and 1500 BCE. Wheeled vehicles were introduced to China from the west. In Britain, a large wooden wheel was uncovered at the Must Farm site in East Anglia in 2016. The specimen, dating from 1,100 to 800 BCE, represents the most complete and earliest of its type found in Britain. The wheel's hub is also present. A horse's spine found nearby suggests the wheel may have been part of a horse-drawn cart. The wheel was found in a settlement built on stilts over wetland, indicating that the settlement had some sort of link to dry land. Although large-scale use of wheels did not occur in the Americas prior to European contact, numerous small wheeled artifacts, identified as children's toys, have been found in Mexican archeological sites, some dating to approximately 1500 BCE. Some argue that the primary obstacle to large-scale development of the wheel in the Americas was the absence of domesticated large animals that could be used to pull wheeled carriages.
The closest relative of cattle present in the Americas in pre-Columbian times, the American bison, is difficult to domesticate and was never domesticated by Native Americans; several horse species existed until about 12,000 years ago, but ultimately became extinct. The only large animal that was domesticated in the Western hemisphere, the llama, a pack animal, was not physically suited to use as a draft animal to pull wheeled vehicles, and use of the llama did not spread far beyond the Andes by the time of the arrival of Europeans. On the other hand, Mesoamericans never developed the wheelbarrow, the potter's wheel, or any other practical object with a wheel or wheels. Although present in a number of toys, very similar to those found throughout the world and still made for children today ("pull toys"), the wheel was never put into practical use in Mesoamerica before the 16th century. Possibly the closest the Mayas came to the utilitarian wheel is the spindle whorl, and some scholars believe that these toys were originally made with spindle whorls and spindle sticks as "wheels" and "axes". Aboriginal Australians traditionally used circular discs rolled along the ground for target practice. Nubians from after about 400 BCE used wheels for spinning pottery and as water wheels. It is thought that Nubian waterwheels may have been ox-driven. It is also known that Nubians used horse-drawn chariots imported from Egypt. Starting from the 18th century in West Africa, wheeled vehicles were mostly used for ceremonial purposes in places like Dahomey. Elsewhere in Sub-Saharan Africa, the wheel was barely used for transportation well into the 19th century, with the exception of Ethiopia and Somalia. The spoked wheel was in continued use without major modification until the 1870s, when wire-spoked wheels and pneumatic tires were invented. Pneumatic tires can greatly reduce rolling resistance and improve comfort. Wire spokes are under tension, not compression, making it possible for the wheel to be both stiff and light. Early radially-spoked wire wheels gave rise to tangentially-spoked wire wheels, which were widely used on cars into the late 20th century. Cast alloy wheels are now more commonly used; forged alloy wheels are used when weight is critical. The invention of the wheel has also been important for technology in general, important applications including the water wheel, the cogwheel (see also antikythera mechanism), the spinning wheel, and the astrolabe or torquetum. More modern descendants of the wheel include the propeller, the jet engine, the flywheel (gyroscope) and the turbine. Mechanics and function A wheeled vehicle requires much less work to move than simply dragging the same weight. The low resistance to motion is explained by the fact that the frictional work done is no longer at the surface that the vehicle is traversing, but in the bearings. In the simplest and oldest case the bearing is just a round hole through which the axle passes (a "plain bearing"). Even with a plain bearing, the frictional work is greatly reduced because: The normal force at the sliding interface is the same as with simple dragging. The sliding distance is reduced for a given distance of travel. The coefficient of friction at the interface is usually lower. Example: If a 100 kg object is dragged for 10 m along a surface with the coefficient of friction μ = 0.5, the normal force is 981 N and the work done (work = force × distance) is 981 × 0.5 × 10 = 4905 joules. Now give the object four wheels. The normal force between the four wheels and their axles is the same in total, 981 N. Assume, for wood, μ = 0.25, and say the wheel diameter is 1000 mm and the axle diameter is 50 mm. While the object still moves 10 m, the sliding surfaces inside the bearings slide over each other a distance of only 0.5 m (10 m scaled by the 50:1000 axle-to-wheel diameter ratio). The work done is 981 × 0.25 × 0.5 ≈ 123 joules; the work has been reduced to 1/40 of that of dragging. This arithmetic is written out in the short sketch below.
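A minimal sketch of the example's arithmetic (in Python; the mass, coefficients, and diameters are the same assumed values as in the example above):

G = 9.81         # gravitational acceleration, m/s^2
mass = 100.0     # kg
distance = 10.0  # m travelled

normal_force = mass * G  # 981 N in both cases

# Case 1: dragging along the ground
mu_ground = 0.5
work_drag = mu_ground * normal_force * distance  # 4905 J

# Case 2: rolling on wheels with plain bearings
mu_bearing = 0.25      # wood on wood, as assumed above
wheel_diameter = 1.0   # m
axle_diameter = 0.05   # m
# The axle surface slides inside the bearing only in proportion
# to the axle-to-wheel diameter ratio:
sliding_distance = distance * axle_diameter / wheel_diameter  # 0.5 m
work_wheels = mu_bearing * normal_force * sliding_distance    # about 123 J

print(work_drag, work_wheels, work_drag / work_wheels)  # 4905.0 122.625 40.0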
Additional energy is lost from the wheel-to-road interface. This is termed rolling resistance, which is predominantly a deformation loss. It depends on the nature of the ground, the material of the wheel, its inflation in the case of a tire, the net torque exerted by any engine, and many other factors. A wheel can also offer advantages in traversing irregular surfaces if the wheel radius is sufficiently large compared to the irregularities. The wheel alone is not a machine, but when attached to an axle in conjunction with a bearing, it forms the wheel and axle, one of the simple machines. A driven wheel is an example of a wheel and axle. Wheels pre-date driven wheels by about 6000 years, themselves an evolution of using round logs as rollers to move a heavy load, a practice going back so far in prehistory that it has not been dated. Construction Rim The rim is the "outer edge of a wheel, holding the tire". It makes up the outer circular design of the wheel on which the inside edge of the tire is mounted on vehicles such as automobiles. For example, on a bicycle wheel the rim is a large hoop attached to the outer ends of the spokes of the wheel that holds the tire and tube. In the 1st millennium BCE an iron rim was introduced around the wooden wheels of chariots. Hub The hub is the center of the wheel; it typically houses a bearing and is where the spokes meet. A hubless wheel (also known as a rim-rider or centerless wheel) is a type of wheel with no center hub. More specifically, the hub is actually almost as big as the wheel itself. The axle is hollow, following the wheel at very close tolerances. Spokes A spoke is one of some number of rods radiating from the center of a wheel (the hub where the axle connects), connecting the hub with the round traction surface. The term originally referred to portions of a log which had been split lengthwise into four or six sections. The radial members of a wagon wheel were made by carving a spoke (from a log) into their finished shape. A spokeshave is a tool originally developed for this purpose. Eventually, the term spoke was more commonly applied to the finished product of the wheelwright's work than to the materials used. Wire The rims of wire wheels (or "wire spoked wheels") are connected to their hubs by wire spokes. Although these wires are generally stiffer than a typical wire rope, they function mechanically the same as tensioned flexible wires, keeping the rim true while supporting applied loads. Wire wheels are used on most bicycles and still used on many motorcycles. They were invented by aeronautical engineer George Cayley and first used in bicycles by James Starley. A process of assembling wire wheels is described as wheelbuilding. Tire A tire (American English and Canadian English) or tyre (Commonwealth English) is a ring-shaped covering that fits around a wheel rim to protect it and enable better vehicle performance by providing a flexible cushion that absorbs shock while keeping the wheel in close contact with the ground.
The word itself may be derived from the word "tie", which refers to the outer steel ring part of a wooden cart wheel that ties the wood segments together. The fundamental materials of modern tires are synthetic rubber, natural rubber, fabric, and wire, along with other compound chemicals. They consist of a tread and a body. The tread provides traction while the body ensures support. Before rubber was invented, the first versions of tires were simply bands of metal that fitted around wooden wheels to prevent wear and tear. Today, the vast majority of tires are pneumatic inflatable structures, comprising a doughnut-shaped body of cords and wires encased in rubber and generally filled with compressed air to form an inflatable cushion. Pneumatic tires are used on many types of vehicles, such as cars, bicycles, motorcycles, trucks, earthmovers, and aircraft. Protruding or covering attachments Extreme off-road conditions have resulted in the invention of several types of wheel cover, which may be constructed as removable attachments or as permanent covers. Wheels fitted with such covers are no longer necessarily round, or have panels that make the ground-contact area flat. Examples include: Snow chains - Chain assemblies that wrap around the tire to provide increased grip, designed for deep snow. Dreadnaught wheel - A type of permanently attached hinged panels for general extreme off-road use. These are not connected directly to the wheels, but to each other. Pedrail wheel - A system of rails that holds panels that hold the vehicle. These do not necessarily have to be built as a circle (wheel) and are thus also a form of continuous track. A version of the above examples, of uncertain name, was commonly used on heavy artillery during World War I, for example on the Cannone da 149/35 A and the Big Bertha. These were panels connected to each other by multiple hinges that could be installed over a contemporary wheel. Continuous track - A system of linked and hinged chains/panels that cover multiple wheels in a way that allows the vehicle's mass to be distributed across the space between wheels that are positioned in front of or behind other wheels. "Tire totes" - A bag designed to cover a tire to improve traction in deep snow. Truck and bus wheels may lock (stop rotating) under certain circumstances, such as brake system failure. To help detect this, they sometimes feature "wheel rotation indicators": colored strips of plastic attached to the rim and protruding out from it, such that they can be seen by the driver in the side-view mirrors. These devices were invented and patented in 1998 by a Canadian truck shop owner. Alternatives While wheels are very widely used for ground transport, there are alternatives, some of which are suitable for terrain where wheels are ineffective. Alternative methods for ground transport without wheels include: Maglev Sled, ski or travois Hovercraft and ekranoplans Walking (pedestrian), litter (vehicle) or a walking machine Horse riding Caterpillar tracks (operated by wheels) Pedrail wheels, using aspects of both wheel and caterpillar track Spheres, as used by Dyson vacuum cleaners and hamster balls Screw-propelled vehicle Symbolism The wheel has also become a strong cultural and spiritual metaphor for a cycle or regular repetition (see chakra, reincarnation, Yin and Yang among others). As such and because of the difficult terrain, wheeled vehicles were forbidden in old Tibet.
The wheel in ancient China is seen as a symbol of health and strength and used by some villages as a tool to predict future health and success. The diameter of the wheel is an indicator of one's future health. The Kalachakra, or wheel of time, is also a subject in some forms of Buddhism, along with the dharmachakra. The winged wheel is a symbol of progress, seen in many contexts including the coat of arms of Panama, the logo of the Ohio State Highway Patrol and the State Railway of Thailand. The wheel is also the prominent figure on the flag of India. The wheel in this case represents law (dharma). It also appears in the flag of the Romani people, hinting at their nomadic history and their Indian origins. The introduction of spoked (chariot) wheels in the Middle Bronze Age appears to have carried a certain prestige. The sun cross appears to have a significance in Bronze Age religion, replacing the earlier concept of a solar barge with the more 'modern' and technologically advanced solar chariot. The wheel was also a solar symbol for the Ancient Egyptians. In modern usage, the 'invention of the wheel' can be considered a symbol of one of the first technologies of early civilization, alongside farming and metalwork, and thus be used as a benchmark to grade the level of societal progress. Some Neopagans such as Wiccans have adopted the Wheel of the Year into their religious practices.
Technology
Tools and machinery
null
33629
https://en.wikipedia.org/wiki/Weak%20interaction
Weak interaction
In nuclear physics and particle physics, the weak interaction, also called the weak force, is one of the four known fundamental interactions, with the others being electromagnetism, the strong interaction, and gravitation. It is the mechanism of interaction between subatomic particles that is responsible for the radioactive decay of atoms; the weak interaction also participates in nuclear fission and nuclear fusion. The theory describing its behaviour and effects is sometimes called quantum flavordynamics (QFD); however, the term QFD is rarely used, because the weak force is better understood in terms of electroweak theory (EWT). The effective range of the weak force is limited to subatomic distances and is less than the diameter of a proton. Background The Standard Model of particle physics provides a uniform framework for understanding electromagnetic, weak, and strong interactions. An interaction occurs when two particles (typically, but not necessarily, half-integer spin fermions) exchange integer-spin, force-carrying bosons. The fermions involved in such exchanges can be either elementary (e.g. electrons or quarks) or composite (e.g. protons or neutrons), although at the deepest levels, all weak interactions ultimately are between elementary particles. In the weak interaction, fermions can exchange three types of force carriers, namely the W+, W−, and Z bosons. The masses of these bosons are far greater than the mass of a proton or neutron, which is consistent with the short range of the weak force. In fact, the force is termed weak because its field strength over any set distance is typically several orders of magnitude less than that of the electromagnetic force, which itself is further orders of magnitude less than the strong nuclear force. The weak interaction is the only fundamental interaction that breaks parity symmetry, and similarly, but far more rarely, the only interaction to break charge–parity symmetry. Quarks, which make up composite particles like neutrons and protons, come in six "flavours" – up, down, charm, strange, top and bottom – which give those composite particles their properties. The weak interaction is unique in that it allows quarks to swap their flavour for another. The swapping of those properties is mediated by the force carrier bosons. For example, during beta-minus decay, a down quark within a neutron is changed into an up quark, thus converting the neutron to a proton and resulting in the emission of an electron and an electron antineutrino. The weak interaction is important in the fusion of hydrogen into helium in a star. This is because it can convert a proton (hydrogen) into a neutron to form deuterium, which is important for the continuation of nuclear fusion to form helium. The accumulation of neutrons facilitates the buildup of heavy nuclei in a star. Most fermions decay by a weak interaction over time. Such decay makes radiocarbon dating possible, as carbon-14 decays through the weak interaction to nitrogen-14. It can also create radioluminescence, commonly used in tritium luminescence, and in the related field of betavoltaics (but not similar to radium luminescence). The electroweak force is believed to have separated into the electromagnetic and weak forces during the quark epoch of the early universe. History In 1933, Enrico Fermi proposed the first theory of the weak interaction, known as Fermi's interaction. He suggested that beta decay could be explained by a four-fermion interaction, involving a contact force with no range.
In the mid-1950s, Chen-Ning Yang and Tsung-Dao Lee first suggested that the weak interaction might violate parity – the symmetry under which the handedness of particle spins is conserved. In 1957, Chien Shiung Wu and collaborators confirmed the symmetry violation. In the 1960s, Sheldon Glashow, Abdus Salam and Steven Weinberg unified the electromagnetic force and the weak interaction by showing them to be two aspects of a single force, now termed the electroweak force. The existence of the W and Z bosons was not directly confirmed until 1983. Properties The electrically charged weak interaction is unique in a number of respects: It is the only interaction that can change the flavour of quarks and leptons (i.e., change one type of quark into another). It is the only interaction that violates P, or parity symmetry. It is also the only one that violates charge–parity (CP) symmetry. Both the electrically charged and the electrically neutral interactions are mediated (propagated) by force-carrier particles that have significant masses, an unusual feature which is explained in the Standard Model by the Higgs mechanism. Due to their large mass (approximately 90 GeV/c²), these carrier particles, called the W and Z bosons, are short-lived, with a lifetime of under 10⁻²⁴ seconds. The weak interaction has a coupling constant (an indicator of how frequently interactions occur) between 10⁻⁷ and 10⁻⁶, compared to the electromagnetic coupling constant of about 10⁻² and the strong interaction coupling constant of about 1; consequently the weak interaction is "weak" in terms of intensity. The weak interaction has a very short effective range (around 10⁻¹⁷ to 10⁻¹⁶ m (0.01 to 0.1 fm)). At distances around 10⁻¹⁸ meters (0.001 fm), the weak interaction has an intensity of a similar magnitude to the electromagnetic force, but this starts to decrease exponentially with increasing distance. Scaled up by just one and a half orders of magnitude, at distances of around 3×10⁻¹⁷ m, the weak interaction becomes 10,000 times weaker. The weak interaction affects all the fermions of the Standard Model, as well as the Higgs boson; neutrinos interact only through gravity and the weak interaction. The weak interaction does not produce bound states, nor does it involve binding energy – something that gravity does on an astronomical scale, the electromagnetic force does at the molecular and atomic levels, and the strong nuclear force does only at the subatomic level, inside of nuclei. Its most noticeable effect is due to its first unique feature: the charged weak interaction causes flavour change. For example, a neutron is heavier than a proton (its partner nucleon) and can decay into a proton by changing the flavour (type) of one of its two down quarks to an up quark. Neither the strong interaction nor electromagnetism permits flavour changing, so this can only proceed by weak decay; without weak decay, quark properties such as strangeness and charm (associated with the strange quark and charm quark, respectively) would also be conserved across all interactions. All mesons are unstable because of weak decay. In the process known as beta decay, a down quark in the neutron can change into an up quark by emitting a virtual W⁻ boson, which then decays into an electron and an electron antineutrino. Another example is electron capture – a common variant of radioactive decay – wherein a proton and an electron within an atom interact and are changed to a neutron (an up quark is changed to a down quark), and an electron neutrino is emitted.
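At the level of the constituent quarks and leptons, these two processes can be written out explicitly (standard notation; the W⁻ in beta decay is virtual):

\[ d \rightarrow u + W^{-}, \qquad W^{-} \rightarrow e^{-} + \bar{\nu}_e \qquad \text{(beta decay)} \]
\[ u + e^{-} \rightarrow d + \nu_e \qquad \text{(electron capture)} \]

In both cases the electric charge, counting the quark charges of −1/3 (down) and +2/3 (up), balances on each side.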
Due to the large masses of the W bosons, particle transformations or decays (e.g., flavour change) that depend on the weak interaction typically occur much more slowly than transformations or decays that depend only on the strong or electromagnetic forces. For example, a neutral pion decays electromagnetically, and so has a life of only about 10⁻¹⁶ seconds. In contrast, a charged pion can only decay through the weak interaction, and so lives about 10⁻⁸ seconds, or a hundred million times longer than a neutral pion. A particularly extreme example is the weak-force decay of a free neutron, which takes about 15 minutes. Weak isospin and weak hypercharge All particles have a property called weak isospin (symbol T₃), which serves as an additive quantum number that restricts how the particle can interact with the W± of the weak force. Weak isospin plays the same role in the weak interaction with the W± as electric charge does in electromagnetism, and color charge in the strong interaction; a different number with a similar name, weak charge, discussed below, is used for interactions with the Z⁰. All left-handed fermions have a weak isospin value of either +1/2 or −1/2; all right-handed fermions have 0 isospin. For example, the up quark has T₃ = +1/2 and the down quark has T₃ = −1/2. A quark never decays through the weak interaction into a quark of the same T₃: Quarks with a T₃ of +1/2 only decay into quarks with a T₃ of −1/2 and conversely. In any given strong, electromagnetic, or weak interaction, weak isospin is conserved: The sum of the weak isospin numbers of the particles entering the interaction equals the sum of the weak isospin numbers of the particles exiting that interaction. For example, a (left-handed) π⁺, with a weak isospin of +1, normally decays into a ν_μ (with T₃ = +1/2) and a μ⁺ (as a right-handed antiparticle, +1/2). For the development of the electroweak theory, another property, weak hypercharge, was invented, defined as Y_W = 2(Q − T₃), where Y_W is the weak hypercharge of a particle with electrical charge Q (in elementary charge units) and weak isospin T₃. Weak hypercharge is the generator of the U(1) component of the electroweak gauge group; whereas some particles have a weak isospin of zero, all known spin-1/2 particles have a non-zero weak hypercharge. Interaction types There are two types of weak interaction (called vertices). The first type is called the "charged-current interaction" because the weakly interacting fermions form a current with total electric charge that is nonzero. The second type is called the "neutral-current interaction" because the weakly interacting fermions form a current with total electric charge of zero. It is responsible for the (rare) deflection of neutrinos. The two types of interaction follow different selection rules. This naming convention is often misunderstood to label the electric charge of the W and Z bosons; however, the naming convention predates the concept of the mediator bosons, and clearly (at least in name) labels the charge of the current (formed from the fermions), not necessarily the bosons.
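For example, applying Y_W = 2(Q − T₃) to first-generation fermions gives:

\[ u_L:\ Y_W = 2\left(\tfrac{2}{3} - \tfrac{1}{2}\right) = \tfrac{1}{3}, \qquad e_L:\ Y_W = 2\left(-1 + \tfrac{1}{2}\right) = -1, \qquad e_R:\ Y_W = 2\left(-1 - 0\right) = -2. \]

The right-handed electron, with T₃ = 0, still carries weak hypercharge, consistent with the statement that all known spin-1/2 particles have non-zero Y_W.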
Charged-current interaction In one type of charged current interaction, a charged lepton (such as an electron or a muon, having a charge of −1) can absorb a W⁺ boson (a particle with a charge of +1) and be thereby converted into a corresponding neutrino (with a charge of 0), where the type ("flavour") of neutrino (electron ν_e, muon ν_μ, or tau ν_τ) is the same as the type of lepton in the interaction, for example: μ⁻ + W⁺ → ν_μ. Similarly, a down-type quark (d, s, or b, with a charge of −1/3) can be converted into an up-type quark (u, c, or t, with a charge of +2/3) by emitting a W⁻ boson or by absorbing a W⁺ boson. More precisely, the down-type quark becomes a quantum superposition of up-type quarks: that is to say, it has a possibility of becoming any one of the three up-type quarks, with the probabilities given in the CKM matrix tables. Conversely, an up-type quark can emit a W⁺ boson, or absorb a W⁻ boson, and thereby be converted into a down-type quark, for example: u → d + W⁺. The W boson is unstable, so it rapidly decays, with a very short lifetime; for example: W⁻ → e⁻ + ν̄_e. Decay of a W boson to other products can happen, with varying probabilities. In the so-called beta decay of a neutron (see picture, above), a down quark within the neutron emits a virtual W⁻ boson and is thereby converted into an up quark, converting the neutron into a proton. Because of the limited energy involved in the process (i.e., the mass difference between the down quark and the up quark), the virtual W⁻ boson can only carry sufficient energy to produce an electron and an electron antineutrino – the two lowest-possible masses among its prospective decay products. At the quark level, the process can be represented as: d → u + e⁻ + ν̄_e. Neutral-current interaction In neutral current interactions, a quark or a lepton (e.g., an electron or a muon) emits or absorbs a neutral Z⁰ boson, for example: e⁻ → e⁻ + Z⁰. Like the W bosons, the Z⁰ boson also decays rapidly, for example: Z⁰ → e⁻ + e⁺. Unlike the charged-current interaction, whose selection rules are strictly limited by chirality, electric charge, and weak isospin, the neutral-current interaction can cause any two fermions in the standard model to deflect: either particles or antiparticles, with any electric charge, and both left- and right-chirality, although the strength of the interaction differs. The quantum number weak charge (Q_W) serves the same role in the neutral current interaction with the Z⁰ that electric charge (Q, with no subscript) does in the electromagnetic interaction: It quantifies the vector part of the interaction. Its value is given by Q_W = 2T₃ − 4Q sin²θ_W = 2T₃ − Q + Q(1 − 4 sin²θ_W). Since the weak mixing angle θ_W ≈ 29°, the parenthetic expression (1 − 4 sin²θ_W) ≈ 0.060, with its value varying slightly with the momentum difference (called "running") between the particles involved. Hence Q_W ≈ 2T₃ − Q, since by convention sin²θ_W ≈ 1/4, and for all fermions involved in the weak interaction T₃ = ±1/2. The weak charge of charged leptons is then close to zero, so these mostly interact with the Z⁰ boson through the axial coupling. Electroweak theory The Standard Model of particle physics describes the electromagnetic interaction and the weak interaction as two different aspects of a single electroweak interaction. This theory was developed around 1968 by Sheldon Glashow, Abdus Salam, and Steven Weinberg, and they were awarded the 1979 Nobel Prize in Physics for their work. The Higgs mechanism provides an explanation for the presence of three massive gauge bosons (W⁺, W⁻, Z⁰, the three carriers of the weak interaction), and the photon (γ, the massless gauge boson that carries the electromagnetic interaction).
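As a check of that last statement, evaluating the formula for the electron (T₃ = −1/2, Q = −1) gives

\[ Q_W = 2\left(-\tfrac{1}{2}\right) - 4(-1)\sin^2\theta_W = -\left(1 - 4\sin^2\theta_W\right) \approx -0.06, \]

indeed close to zero, while for a neutrino (T₃ = +1/2, Q = 0) the same formula gives Q_W = +1.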
According to the electroweak theory, at very high energies, the universe has four components of the Higgs field whose interactions are carried by four massless scalar bosons forming a complex scalar Higgs field doublet. Likewise, there are four massless electroweak vector bosons, each similar to the photon. However, at low energies, this gauge symmetry is spontaneously broken down to the symmetry of electromagnetism, since one of the Higgs fields acquires a vacuum expectation value. Naïvely, the symmetry-breaking would be expected to produce three massless bosons, but instead those "extra" three Higgs bosons become incorporated into the three weak bosons, which then acquire mass through the Higgs mechanism. These three composite bosons are the W⁺, W⁻, and Z⁰ bosons actually observed in the weak interaction. The fourth electroweak gauge boson is the photon (γ) of electromagnetism, which does not couple to any of the Higgs fields and so remains massless. This theory has made a number of predictions, including a prediction of the masses of the W and Z bosons before their discovery and detection in 1983. On 4 July 2012, the CMS and the ATLAS experimental teams at the Large Hadron Collider independently announced that they had confirmed the formal discovery of a previously unknown boson of mass between 125 and 127 GeV/c², whose behaviour so far was "consistent with" a Higgs boson, while adding a cautious note that further data and analysis were needed before positively identifying the new boson as being a Higgs boson of some type. By 14 March 2013, a Higgs boson was tentatively confirmed to exist. In a speculative case where the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models where SU(2) becomes confining above that scale appear quantitatively similar to the Standard Model at lower energies, but dramatically different above symmetry breaking. Violation of symmetry The laws of nature were long thought to remain the same under mirror reflection. The results of an experiment viewed via a mirror were expected to be identical to the results of a separately constructed, mirror-reflected copy of the experimental apparatus watched through the mirror. This so-called law of parity conservation was known to be respected by classical gravitation, electromagnetism and the strong interaction; it was assumed to be a universal law. However, in the mid-1950s Chen-Ning Yang and Tsung-Dao Lee suggested that the weak interaction might violate this law. Chien Shiung Wu and collaborators in 1957 discovered that the weak interaction violates parity, earning Yang and Lee the 1957 Nobel Prize in Physics. Although the weak interaction was once described by Fermi's theory, the discovery of parity violation and renormalization theory suggested that a new approach was needed. In 1957, Robert Marshak and George Sudarshan and, somewhat later, Richard Feynman and Murray Gell-Mann proposed a V − A (vector minus axial vector, or left-handed) Lagrangian for weak interactions. In this theory, the weak interaction acts only on left-handed particles (and right-handed antiparticles). Since the mirror reflection of a left-handed particle is right-handed, this explains the maximal violation of parity. The V − A theory was developed before the discovery of the Z boson, so it did not include the right-handed fields that enter in the neutral current interaction. However, this theory allowed a compound symmetry CP to be conserved.
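A minimal sketch of the V − A structure in modern notation (overall normalization conventions vary between texts): each charged weak current couples only the left-handed projection of the fermion fields,

\[ J^{\mu} = \bar{\psi}_1 \gamma^{\mu}\left(1 - \gamma^{5}\right)\psi_2, \qquad \mathcal{L}_{\text{eff}} = -\frac{G_F}{\sqrt{2}}\, J^{\mu\dagger} J_{\mu}, \]

where the operator (1 − γ⁵)/2 projects onto left-handed chirality. Because only one chirality participates, the mirror image of an allowed process is not itself allowed, which is the origin of the maximal parity violation described above.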
CP combines parity P (switching left to right) with charge conjugation C (switching particles with antiparticles). Physicists were again surprised when in 1964, James Cronin and Val Fitch provided clear evidence in kaon decays that CP symmetry could be broken too, winning them the 1980 Nobel Prize in Physics. In 1973, Makoto Kobayashi and Toshihide Maskawa showed that CP violation in the weak interaction required more than two generations of particles, effectively predicting the existence of a then unknown third generation. This discovery earned them half of the 2008 Nobel Prize in Physics. Unlike parity violation, CP violation occurs only in rare circumstances. Despite its limited occurrence under present conditions, it is widely believed to be the reason that there is much more matter than antimatter in the universe, and thus forms one of Andrei Sakharov's three conditions for baryogenesis.
Physical sciences
Physics
null
33637
https://en.wikipedia.org/wiki/Wasabi
Wasabi
Wasabi (Japanese: 山葵, わさび, or ワサビ) or Japanese horseradish (Eutrema japonicum syn. Wasabia japonica) is a plant of the family Brassicaceae, which also includes horseradish and mustard in other genera. The plant is native to Japan, the Russian Far East including Sakhalin, and the Korean Peninsula. It grows naturally along stream beds in mountain river valleys in Japan. Wasabi is grown for its rhizomes, which are ground into a paste as a pungent condiment for sushi and other foods. It is similar in taste to hot mustard or horseradish rather than chilli peppers, in that it stimulates the nose more than the tongue, but freshly grated wasabi has a subtly distinct flavour. The main cultivars in the marketplace are E. japonicum 'Daruma' and 'Mazuma', but there are many others. The oldest record of wasabi as a food dates to the 8th century AD. The popularity of wasabi in English-speaking countries has coincided with that of sushi, growing steadily from about 1980. Due to constraints that limit the Japanese wasabi plant's mass cultivation and thus increase its price and decrease availability outside Japan, the western horseradish plant is widely used in place of wasabi. This is commonly referred to as "western wasabi" (seiyō wasabi) in Japan. Taxonomy Siebold named Cochlearia (?) wasabi in 1830, noting its use pro condimento or "as a condiment"; however, this is a nomen nudum, and the synonym Eutrema wasabi, published by Maximovich in 1873, is thus an illegitimate name. The wasabi plant was first validly described by Miquel in 1866, as Lunaria (?) japonica, from the type collected by Siebold in Japan, though the precise type locality was not recorded. In 1899 Matsumura erected the genus Wasabia, recognising within it the species Wasabia pungens and Wasabia hederaefolia; these are now regarded as synonyms of Eutrema japonicum. In 1912 Matsumura recognised the species Wasabia japonica, treating his earlier Wasabia pungens as a synonym. In 1930, Koidzumi transferred the wasabi plant to the genus Eutrema, the correct name and author citation being Eutrema japonicum (Miq.) Koidz. Description The plant has large leaves produced from long, thin stalks. The leaves are simple and large, with palmate veins. Wasabi flowers appear in clusters on long stems, blooming from late winter to early spring. Culinary uses As condiment Wasabi is mainly used to make wasabi paste, a pungent, spicy condiment eaten with foods like sushi. The part used for wasabi paste has been characterized as the rhizome or the stem, or the "rhizome plus the base part of the stem". Stores generally sell only this part of the plant. The fresh rhizome is grated into a paste and eaten in small amounts at a time. Traditionally, coarse sharkskin is used to grate the root, but metal graters called oroshigane are used in modern times. Fresh wasabi paste loses its flavour quickly if left uncovered, and so the paste is grated on the spot in some high-end restaurants. Sushi chefs usually put the wasabi between the fish and the rice, to cover the wasabi and preserve its flavour. Store-bought wasabi paste is usually made from dried wasabi powder, and sold in bottles or squeezable toothpaste-like tubes. As flavoring Wasabi is used to flavour many foods, especially dry snacks. Wasabi-coated legumes (peanuts, soybeans, or peas), roasted or fried and then dusted with wasabi powder, are eaten as a snack. Others Fresh wasabi leaves can be eaten raw, having a spicy flavour, but a common side effect is diarrhea.
Wasabizuke is made of wasabi leaves pickled in sake lees, and is considered a specialty of Shizuoka Prefecture. Surrogates Wasabi favours growing conditions that restrict its wide cultivation – among other things, it is quite intolerant of direct sunlight, requires an air temperature between 8 and 20 °C (46 and 68 °F), and prefers high humidity in summer. This makes fully satisfying commercial demand impossible for growers, which makes wasabi quite expensive. Therefore, outside Japan, finding real wasabi plants is rare. A common substitute is a mixture of horseradish, mustard, starch, and green food colouring or spinach powder. Often packages are labelled as wasabi while the ingredients do not include any part of the wasabi plant. The primary difference is colour, with wasabi being naturally green. Fresh horseradish root is described as having a similar (albeit simpler) flavour and texture to that of fresh wasabi. In Japan, horseradish is referred to as seiyō wasabi ("western wasabi"). Outside of Japan, where fresh wasabi is hard to obtain, a powdered mixture of horseradish and mustard oil is used at a majority of sushi restaurants, including reputable ones. In the United States, true wasabi is generally found only at specialty grocers and high-end restaurants. Chemistry The chemical in wasabi that provides its initial pungency is the volatile compound allyl isothiocyanate, which is produced by hydrolysis of allyl glucosinolate, a natural thioglucoside (a conjugate of the sugar glucose and a sulfur-containing organic compound); the hydrolysis reaction is catalyzed by myrosinase and occurs when the enzyme is released on cell rupture caused by maceration – e.g., grating – of the plant. The same compound is responsible for the pungency of horseradish and mustard. Allyl isothiocyanate is also released when the wasabi plant itself is damaged, serving as a defense mechanism. The sensory neural target of mustard oil is the chemosensory receptor TRPA1, also known as the wasabi receptor. The unique flavour of wasabi is a result of complex chemical mixtures from the broken cells of the plant, including those resulting from the hydrolysis of thioglucosides, including sinigrin and other glucosinolates, into glucose and methylthioalkyl isothiocyanates: 6-(methylsulfinyl)hexyl isothiocyanate (6-MITC), 7-methylthioheptyl isothiocyanate, and 8-methylthiooctyl isothiocyanate. Such isothiocyanates inhibit microbial growth, perhaps with implications for preserving food against spoilage and suppressing oral bacterial growth. Because the burning sensations of wasabi are not oil-based, they are short-lived compared to the effects of capsaicin in chilli peppers, and are washed away with more food or liquid. The sensation is felt primarily in the nasal passage and can be painful depending on the amount consumed. Inhaling or sniffing wasabi vapour has an effect like smelling salts, a property exploited by researchers attempting to create a smoke alarm for the deaf. One deaf subject participating in a test of the prototype awoke within 10 seconds of wasabi vapour being sprayed into his sleeping chamber. The 2011 Ig Nobel Prize in Chemistry was awarded to the researchers for determining the ideal density of airborne wasabi to wake people in the event of an emergency. Nutritional information Wasabi is normally consumed in such small quantities that its nutritional value is negligible. The major constituents of raw wasabi root are carbohydrates (23.5%), water (69.1%), fat (0.63%), and protein (4.8%).
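Schematically, the pungency-generating reaction is the myrosinase-catalysed hydrolysis of sinigrin (the allyl glucosinolate named above); a simplified overall scheme, omitting the unstable aglycone intermediate, is:

\[ \text{sinigrin} + \mathrm{H_2O} \ \xrightarrow{\text{myrosinase}}\ \text{allyl isothiocyanate} + \text{D-glucose} + \mathrm{HSO_4^-} \]

The enzyme and its substrate are stored in separate cell compartments, which is why the reaction, and hence the pungency, begins only once grating ruptures the cells.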
Cultivation Few places are suitable for large-scale wasabi cultivation, which is difficult even in ideal conditions. In Japan, wasabi is cultivated mainly in these regions: the Izu Peninsula in Shizuoka Prefecture ("Traditional Wasabi Cultivation in Shizuoka, Japan" is designated both a Globally Important and a Japanese Nationally Important Agricultural Heritage System); Nagano Prefecture, including the Daio Wasabi Farm in Azumino (a popular tourist attraction and the world's largest commercial wasabi farm); Iwate Prefecture; and Shimane Prefecture, known for its Hikimi wasabi. Numerous artificial cultivation facilities also exist as far north as Hokkaido and as far south as Kyushu. As the demand for real wasabi is higher than can be met within Japan, Japan imports large quantities of wasabi from the United States, Canada, Taiwan, South Korea, Pakistan, Thailand and New Zealand. In North America, wasabi is cultivated by a handful of small farmers and companies in the rain forests on the coast of Western Canada, the Oregon Coast, and in areas of the Blue Ridge Mountains in North Carolina and Tennessee. In Europe, wasabi is grown commercially in Iceland, the Netherlands, Hungary, and the UK. Modern cultivars of wasabi mostly derive from three traditional cultivars, 'Fujidaruma', 'Shimane No. 3' and 'Mazuma'. Sequencing of the chloroplast genome, which is inherited maternally in wasabi, supports this conclusion.
Biology and health sciences
Herbs and spices
Plants
33645
https://en.wikipedia.org/wiki/Blog
Blog
A blog (a truncation of "weblog") is an informational website consisting of discrete, often informal diary-style text entries (posts). Posts are typically displayed in reverse chronological order so that the most recent post appears first, at the top of the web page. In the 2000s, blogs were often the work of a single individual, occasionally of a small group, and often covered a single subject or topic. In the 2010s, "multi-author blogs" (MABs) emerged, featuring the writing of multiple authors and sometimes professionally edited. MABs from newspapers, other media outlets, universities, think tanks, advocacy groups, and similar institutions account for an increasing quantity of blog traffic. The rise of Twitter and other "microblogging" systems helps integrate MABs and single-author blogs into the news media. Blog can also be used as a verb, meaning to maintain or add content to a blog. The emergence and growth of blogs in the late 1990s coincided with the advent of web publishing tools that facilitated the posting of content by non-technical users who did not have much experience with HTML or computer programming. Previously, knowledge of such technologies as HTML and File Transfer Protocol had been required to publish content on the Web, and early Web users therefore tended to be hackers and computer enthusiasts. As of the 2010s, the majority are interactive Web 2.0 websites, allowing visitors to leave online comments, and it is this interactivity that distinguishes them from other static websites. In that sense, blogging can be seen as a form of social networking service. Indeed, bloggers not only produce content to post on their blogs but also often build social relations with their readers and other bloggers. Blog owners or authors often moderate and filter online comments to remove hate speech or other offensive content. There are also high-readership blogs which do not allow comments. Many blogs provide commentary on a particular subject or topic, ranging from philosophy, religion, and arts to science, politics, and sports. Others function as more personal online diaries or online brand advertising of a particular individual or company. A typical blog combines text, digital images, and links to other blogs, web pages, and other media related to its topic. Most blogs are primarily textual, although some focus on art (art blogs), photographs (photoblogs), videos (video blogs or vlogs), music (MP3 blogs), and audio (podcasts). In education, blogs can be used as instructional resources; these are referred to as edublogs. Microblogging is another type of blogging, featuring very short posts. Blog and blogging are now loosely used for content creation and sharing on social media, especially when the content is long-form and one creates and shares content on a regular basis, so one could be maintaining a blog on Facebook or blogging on Instagram. A 2022 estimate suggested that there were over 600 million public blogs out of more than 1.9 billion websites. History The term "weblog" was coined by Jorn Barger on December 17, 1997. The short form "blog" was coined by Peter Merholz, who jokingly broke the word weblog into the phrase we blog in the sidebar of his blog Peterme.com in May 1999. Shortly thereafter, Evan Williams at Pyra Labs used "blog" as both a noun and verb ("to blog", meaning "to edit one's weblog or to post to one's weblog") and devised the term "blogger" in connection with Pyra Labs' Blogger product, leading to the popularization of the terms.
Origins Before blogging became popular, digital communities took many forms, including Usenet, commercial online services such as GEnie, Byte Information Exchange (BIX) and the early CompuServe, e-mail lists, and Bulletin Board Systems (BBS). In the 1990s, Internet forum software created running conversations with "threads". Threads are topical connections between messages on a virtual "corkboard". Tim Berners-Lee created what is considered by Encyclopedia Britannica to be "the first blog" in 1992, to discuss the progress made on creating the World Wide Web and the software used for it. From June 14, 1993, Mosaic Communications Corporation maintained their "What's New" list of new websites, updated daily and archived monthly. The page was accessible by a special "What's New" button in the Mosaic web browser. In November 1993, Ranjit Bhatnagar started writing about interesting sites, pages and discussion groups he found on the internet, as well as some personal information, on his website Moonmilk, arranging them chronologically in a special section called Ranjit's HTTP Playground. Other early pioneers of blogging, such as Justin Hall, credit him with being an inspiration. The earliest instance of a commercial blog was on the first business-to-consumer Web site, created in 1995 by Ty, Inc., which featured a blog in a section called "Online Diary". The entries were maintained by featured Beanie Babies that were voted for monthly by Web site visitors. The modern blog evolved from the online diary, where people would keep a running account of the events in their personal lives. Most such writers called themselves diarists, journalists, or journalers. Justin Hall, who began personal blogging in 1994 while a student at Swarthmore College, is generally recognized as one of the earlier bloggers, as is Jerry Pournelle. Dave Winer's Scripting News is also credited with being one of the older and longer running weblogs. The Australian Netguide magazine maintained the Daily Net News on their web site from 1996. Daily Net News ran links and daily reviews of new websites, mostly in Australia. Another early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, digital video, and digital pictures transmitted live from a wearable computer and EyeTap device to a web site in 1994. This practice of semi-automated blogging with live video together with text was referred to as sousveillance, and such journals were also used as evidence in legal matters. Some early bloggers, such as The Misanthropic Bitch, who began in 1997, referred to their online presence as a zine, before the term blog entered common usage. The first research paper about blogging was Torill Mortensen and Jill Walker Rettberg's paper "Blogging Thoughts", which analysed how blogs were being used to foster research communities and the exchange of ideas and scholarship, and how this new means of networking overturns traditional power structures. Technology Early blogs were simply manually updated components of common Websites. In 1995, the "Online Diary" on the Ty, Inc. Web site was produced and updated manually before any blogging programs were available. Posts were made to appear in reverse chronological order by manually updating text-based HTML code using FTP software in real time several times a day. To users, this offered the appearance of a live diary that contained multiple new entries per day.
At the beginning of each new day, new diary entries were manually coded into a new HTML file, and at the start of each month, diary entries were archived into their own folder, which contained a separate HTML page for every day of the month. Then, menus that contained links to the most recent diary entry were updated manually throughout the site. This text-based method of organizing thousands of files served as a springboard to define future blogging styles that were captured by blogging software developed years later. The evolution of electronic and software tools to facilitate the production and maintenance of Web articles posted in reverse chronological order made the publishing process feasible for a much larger and less technically-inclined population. Ultimately, this resulted in the distinct class of online publishing that produces the blogs we recognize today. For instance, the use of some sort of browser-based software is now a typical aspect of "blogging". Blogs can be hosted by dedicated blog hosting services, on regular web hosting services, or run using blog software. Rise in popularity After a slow start, blogging rapidly gained in popularity. Blog usage spread during 1999 and the years following, being further popularized by the near-simultaneous arrival of the first hosted blog tools: Bruce Ableson launched Open Diary in October 1998, which soon grew to thousands of online diaries. Open Diary innovated the reader comment, becoming the first blog community where readers could add comments to other writers' blog entries. Brad Fitzpatrick started LiveJournal in March 1999. Andrew Smales created Pitas.com in July 1999 as an easier alternative to maintaining a "news page" on a Web site, followed by DiaryLand in September 1999, focusing more on a personal diary community. Blogger (blogspot.com) was launched in 1999. Political impact An early milestone in the rise in importance of blogs came in 2002, when many bloggers focused on comments by U.S. Senate Majority Leader Trent Lott. Senator Lott, at a party honoring U.S. Senator Strom Thurmond, praised Senator Thurmond by suggesting that the United States would have been better off had Thurmond been elected president. Lott's critics saw these comments as tacit approval of racial segregation, a policy advocated by Thurmond's 1948 presidential campaign. This view was reinforced by documents and recorded interviews dug up by bloggers. (See Josh Marshall's Talking Points Memo.) Though Lott's comments were made at a public event attended by the media, no major media organizations reported on his controversial comments until after blogs broke the story. Blogging helped to create a political crisis that forced Lott to step down as majority leader. Similarly, blogs were among the driving forces behind the "Rathergate" scandal. Television journalist Dan Rather presented documents on the CBS show 60 Minutes that conflicted with accepted accounts of President Bush's military service record. Bloggers declared the documents to be forgeries and presented evidence and arguments in support of that view. Consequently, CBS apologized for what it said were inadequate reporting techniques (see: Little Green Footballs). The impact of these stories gave greater credibility to blogs as a medium of news dissemination. In Russia, some political bloggers have started to challenge the dominance of official, overwhelmingly pro-government media.
Bloggers such as Rustem Adagamov and Alexei Navalny have many followers, and the latter's nickname for the ruling United Russia party, the "party of crooks and thieves", has been adopted by anti-regime protesters. This led to The Wall Street Journal calling Navalny "the man Vladimir Putin fears most" in March 2012. Mainstream popularity By 2004, blogs had become increasingly mainstream, as political consultants, news services, and candidates began using them as tools for outreach and opinion forming. Politicians and political candidates took up blogging to express opinions on war and other issues, cementing blogs' role as a news source. (See Howard Dean and Wesley Clark.) Even politicians not actively campaigning, such as the UK's Labour Party's Member of Parliament (MP) Tom Watson, began to blog to bond with constituents. In January 2005, Fortune magazine listed eight bloggers whom business people "could not ignore": Peter Rojas, Xeni Jardin, Ben Trott, Mena Trott, Jonathan Schwartz, Jason Goldman, Robert Scoble, and Jason Calacanis. Israel was among the first national governments to set up an official blog. Under David Saranga, the Israeli Ministry of Foreign Affairs became active in adopting Web 2.0 initiatives, including an official video blog and a political blog. The Foreign Ministry also held a microblogging press conference via Twitter about its war with Hamas, with Saranga answering questions from the public in common text-messaging abbreviations during a live worldwide press conference. The questions and answers were later posted on IsraelPolitik, the country's official political blog. The impact of blogging on the mainstream media has also been acknowledged by governments. In 2009, the American journalism industry had declined to the point that several newspaper corporations were filing for bankruptcy, resulting in less direct competition between newspapers within the same circulation area. Discussion emerged as to whether the newspaper industry would benefit from a stimulus package by the federal government. U.S. President Barack Obama acknowledged the emerging influence of blogging upon society by saying, "if the direction of the news is all blogosphere, all opinions, with no serious fact-checking, no serious attempts to put stories in context, then what you will end up getting is people shouting at each other across the void, but not a lot of mutual understanding". Between 2009 and 2012, an Orwell Prize for blogging was awarded. In the late 2000s, blogs were often used on business websites and for grassroots political activism. Types There are many different types of blogs, differing not only in the type of content, but also in the way that content is delivered or written. Personal blogs The personal blog is an ongoing online diary or commentary written by an individual, rather than a corporation or organization. While the vast majority of personal blogs attract very few readers, other than the blogger's immediate family and friends, a small number of personal blogs have become popular, to the point that they have attracted lucrative advertising sponsorship. A tiny number of personal bloggers have become famous, both in the online community and in the real world. Collaborative blogs or group blogs A type of weblog in which posts are written and published by more than one author. The majority of high-profile collaborative blogs are organised according to a single uniting theme, such as politics, technology or advocacy.
In recent years, the blogosphere has seen the emergence and growing popularity of more collaborative efforts, often set up by already established bloggers wishing to pool time and resources, both to reduce the pressure of maintaining a popular website and to attract a larger readership. Microblogging Microblogging is the practice of posting small pieces of digital content—which could be text, pictures, links, short videos, or other media—on the internet. Microblogging offers a portable communication mode that feels organic and spontaneous to many users. It has captured the public imagination, in part because the short posts are easy to read on the go or when waiting. Friends use it to keep in touch, business associates use it to coordinate meetings or share useful resources, and celebrities and politicians (or their publicists) microblog about concert dates, lectures, book releases, or tour schedules. A wide and growing range of add-on tools enables sophisticated updates and interaction with other applications. The resulting profusion of functionality is helping to define new possibilities for this type of communication. Examples of these include Twitter, Facebook, Tumblr and, by far the largest, Weibo. Corporate and organizational blogs A blog can be private, as in most cases, or it can be for business, not-for-profit organization, or government purposes. Blogs used internally, and only available to employees via an intranet, are called corporate blogs. Companies use internal corporate blogs to enhance the communication, culture and employee engagement in a corporation. Internal corporate blogs can be used to communicate news about company policies or procedures, build employee esprit de corps and improve morale. Companies and other organizations also use external, publicly accessible blogs for marketing, branding, or public relations purposes. Some organizations have a blog authored by their executive; in practice, many of these executive blog posts are penned by a ghostwriter who makes posts in the style of the credited author. Similar blogs for clubs and societies are called club blogs, group blogs, or by similar names; typical use is to inform members and other interested parties of club and member activities. Aggregated blogs Individuals or organizations may aggregate selected feeds on a specific topic, product or service and provide a combined view for their readers. This allows readers to concentrate on reading instead of searching for quality on-topic content and managing subscriptions. Many such aggregations are called planets, after the Planet software that performs the aggregation; hosting sites usually have a planet. subdomain in the domain name (like http://planet.gnome.org/). By genre Some blogs focus on a particular subject, such as political blogs, journalism blogs, health blogs, travel blogs (also known as travelogs), gardening blogs, house blogs, book blogs, fashion blogs, beauty blogs, lifestyle blogs, party blogs, wedding blogs, photography blogs, project blogs, psychology blogs, sociology blogs, education blogs, niche blogs, classical music blogs, quizzing blogs, legal blogs (often referred to as blawgs), or dreamlogs. How-to/tutorial blogs are becoming increasingly popular. Two common types of genre blogs are art blogs and music blogs. A blog featuring discussions, especially about home and family, is not uncommonly called a mom blog. While not a legitimate type of blog, one used for the sole purpose of spamming is known as a splog.
By media type A blog comprising videos is called a vlog, one comprising links is called a linklog, a site containing a portfolio of sketches is called a sketchblog, and one comprising photos is called a photoblog. Blogs with shorter posts and mixed media types are called tumblelogs. Blogs that are written on typewriters and then scanned are called typecast or typecast blogs. A rare type of blog hosted on the Gopher Protocol is known as a phlog. By device A blog can also be defined by which type of device is used to compose it. A blog written by a mobile device like a mobile phone or PDA could be called a moblog. One early blog was Wearable Wireless Webcam, an online shared diary of a person's personal life combining text, video, and pictures transmitted live from a wearable computer and EyeTap device to a web site. This practice of semi-automated blogging with live video together with text was referred to as sousveillance. Such journals have been used as evidence in legal matters. Reverse blog A reverse blog is composed by its users rather than a single blogger. This system has the characteristics of a blog and the writing of several authors. These can be written by several contributing authors on a topic or opened up for anyone to write. There is typically some limit to the number of entries to keep it from operating like a web forum. Community and cataloging Blogosphere The collective community of all blogs and blog authors, particularly notable and widely read blogs, is known as the blogosphere. Since all blogs are on the internet by definition, they may be seen as interconnected and socially networked, through blogrolls, comments, linkbacks (refbacks, trackbacks or pingbacks), and backlinks. Discussions "in the blogosphere" were occasionally used by the media as a gauge of public opinion on various issues. Because new, untapped communities of bloggers and their readers can emerge in the space of a few years, Internet marketers pay close attention to "trends in the blogosphere". Blog search engines Several blog search engines have been used to search blog contents, such as Bloglines (defunct), BlogScope (defunct), and Technorati (defunct). Blogging communities and directories Several online communities exist that connect people to blogs and bloggers to other bloggers. Interest-specific blogging platforms are also available. For instance, Blogster has a sizable community of political bloggers among its members. Global Voices aggregates international bloggers, "with emphasis on voices that are not ordinarily heard in international mainstream media." Blogging and advertising It is common for blogs to feature banner advertisements or promotional content, either to financially benefit the blogger, support website hosting costs, or to promote the blogger's favourite causes or products. The popularity of blogs has also given rise to "fake blogs", in which a company creates a fictional blog as a marketing tool to promote a product. As the popularity of blogging continued to rise in the mid-2000s, the commercialisation of blogging rapidly increased. Many corporations and companies collaborate with bloggers to increase advertising and engage online communities with their products. In the book Fans, Bloggers, and Gamers, Henry Jenkins stated that "Bloggers take knowledge into their own hands, enabling successful navigation within and between these emerging knowledge cultures.
One can see such behaviour as co-optation into commodity culture insofar as it sometimes collaborates with corporate interests, but one can also see it as increasing the diversity of media culture, providing opportunities for greater inclusiveness, and making more responsive to consumers." Early popularity Before 2006: The blogdex project was launched by researchers in the MIT Media Lab to crawl the Web and gather data from thousands of blogs to investigate their social properties. Information was gathered by the tool for over four years, during which it autonomously tracked the most contagious information spreading in the blog community, ranking it by recency and popularity. It can, therefore, be considered the first instantiation of a memetracker. The project was replaced by tailrank.com, which in turn has been replaced by spinn3r.com. 2006: Blogs were given rankings by Alexa Internet (web hits of Alexa Toolbar users), and formerly by blog search engine Technorati based on the number of incoming links (Technorati stopped doing this in 2014). In August 2006, Technorati found that the most linked-to blog on the internet was that of Chinese actress Xu Jinglei. Chinese media Xinhua reported that this blog received more than 50 million page views, claiming it to be the most popular blog in the world at the time. Technorati rated Boing Boing to be the most-read group-written blog. 2008: By this time, blogging had "become such a mania that a new blog was created every second of every minute of every hour of every day." Researchers have actively analyzed the dynamics of how blogs become popular. There are essentially two measures of this: popularity through citations, as well as popularity through affiliation (i.e., blogroll). The basic conclusion from studies of the structure of blogs is that while it takes time for a blog to become popular through blogrolls, permalinks can boost popularity more quickly and are perhaps more indicative of popularity and authority than blogrolls, since they denote that people are reading the blog's content and deem it valuable or noteworthy in specific cases. Software Blogs are a form of website and can therefore be created with the same software used for creating websites. Many people use managed platforms such as Medium or Substack. These platforms have built-in support for many features such as previewing posts, paywalls, and newsletters. Other people self-host their websites via open-source software such as WordPress or static site generators such as Hugo or Jekyll. Blurring with the mass media Many bloggers, particularly those engaged in participatory journalism, are amateur journalists, and thus they differentiate themselves from the professional reporters and editors who work in mainstream media organizations. Other bloggers are media professionals who are publishing online, rather than via a TV station or newspaper, either as an add-on to a traditional media presence (e.g., hosting a radio show or writing a column in a paper newspaper), or as their sole journalistic output. Some institutions and organizations see blogging as a means of "getting around the filter" of media "gatekeepers" and pushing their messages directly to the public. Many mainstream journalists, meanwhile, write their own blogs—well over 300, according to CyberJournalist.net's J-blog list. The first known use of a blog on a news site was in August 1998, when Jonathan Dube of The Charlotte Observer published one chronicling Hurricane Bonnie.
Some bloggers have moved over to other media. The following bloggers (and others) have appeared on radio and television: Duncan Black (known widely by his pseudonym, Atrios), Glenn Reynolds (Instapundit), Markos Moulitsas Zúniga (Daily Kos), Alex Steffen (Worldchanging), Ana Marie Cox (Wonkette), Nate Silver (FiveThirtyEight.com), and Ezra Klein (Ezra Klein blog in The American Prospect, now in The Washington Post). In counterpoint, Hugh Hewitt exemplifies a mass media personality who has moved in the other direction, adding to his reach in "old media" by being an influential blogger. Similarly, it was Emergency Preparedness and Safety Tips On Air and Online blog articles that captured Surgeon General of the United States Richard Carmona's attention and earned his kudos for the associated broadcasts by talk show host Lisa Tolliver and Westchester Emergency Volunteer Reserves-Medical Reserve Corps Director Marianne Partridge. Blogs have also had an influence on minority languages, bringing together scattered speakers and learners; this is particularly so with blogs in Gaelic languages. Minority-language publishing (which may lack economic feasibility) can find its audience through inexpensive blogging. There are examples of bloggers who have published books based on their blogs, e.g., Salam Pax, Ellen Simonetti, Jessica Cutler, and ScrappleFace. Blog-based books have been given the name blook. A prize for the best blog-based book was initiated in 2005, the Lulu Blooker Prize. However, success has been elusive offline, with many of these books not selling as well as their blogs. The book based on Julie Powell's blog "The Julie/Julia Project" was made into the film Julie & Julia, apparently the first blog-based book to be adapted into a film. Consumer-generated advertising Consumer-generated advertising is a relatively new and controversial development, and it has created a new model of marketing communication from businesses to consumers. Among the various forms of advertising on blogs, the most controversial are sponsored posts. These are blog entries or posts that may be in the form of feedback, reviews, opinion, videos, etc., and usually contain a link back to the desired site using a keyword or several keywords. Blogs have led to some disintermediation and a breakdown of the traditional advertising model, where companies can skip over the advertising agencies (previously the only interface with the customer) and contact the customers directly via social media websites. On the other hand, new companies specialised in blog advertising have been established to take advantage of this new development as well. However, there are many people who look negatively on this new development. Some believe that any form of commercial activity on blogs will destroy the blogosphere's credibility. Legal and social consequences Blogging can result in a range of legal liabilities and other unforeseen consequences. Defamation or liability Several cases have been brought before the national courts against bloggers concerning issues of defamation or liability. U.S. payouts related to blogging totalled $17.4 million by 2009; in some cases these have been covered by umbrella insurance. The courts have returned with mixed verdicts. Internet Service Providers (ISPs), in general, are immune from liability for information that originates with third parties (U.S. Communications Decency Act and the EU Directive 2000/31/EC). In Doe v.
Cahill, the Delaware Supreme Court held that stringent standards had to be met to unmask the anonymous bloggers and also took the unusual step of dismissing the libel case itself (as unfounded under American libel law) rather than referring it back to the trial court for reconsideration. In a bizarre twist, the Cahills were able to obtain the identity of John Doe, who turned out to be the person they suspected: the town's mayor, Councilman Cahill's political rival. The Cahills amended their original complaint, and the mayor settled the case rather than going to trial. In January 2007, two prominent Malaysian political bloggers, Jeff Ooi and Ahirudin Attan, were sued by a pro-government newspaper, The New Straits Times Press (Malaysia) Berhad, Kalimullah bin Masheerul Hassan, Hishamuddin bin Aun and Brenden John a/l John Pereira over alleged defamation. The plaintiff was supported by the Malaysian government. Following the suit, the Malaysian government proposed to "register" all bloggers in Malaysia to better control parties against their interests. This is the first such legal case against bloggers in the country. In the United States, blogger Aaron Wall was sued by Traffic Power for defamation and publication of trade secrets in 2005. According to Wired magazine, Traffic Power had been "banned from Google for allegedly rigging search engine results." Wall and other "white hat" search engine optimization consultants had exposed Traffic Power in what they claim was an effort to protect the public. The case was dismissed for lack of personal jurisdiction, and Traffic Power failed to appeal within the allowed time. In 2009, NDTV issued a legal notice to Indian blogger Kunte for a blog post criticizing their coverage of the Mumbai attacks. The blogger unconditionally withdrew his post, which resulted in several Indian bloggers criticizing NDTV for trying to silence critics. Employment Employees who blog about elements of their place of employment can begin to affect the reputation of their employer, either in a positive way, if the employee is praising the employer and its workplaces, or in a negative way, if the blogger is making negative comments about the company or its practices. In general, attempts by employee bloggers to protect themselves by maintaining anonymity have proved ineffective. In 2009, a controversial and landmark decision by The Hon. Mr Justice Eady refused to grant an order to protect the anonymity of Richard Horton. Horton was a police officer in the United Kingdom who blogged about his job under the name "NightJack". Delta Air Lines fired flight attendant Ellen Simonetti because she posted photographs of herself in uniform on an aeroplane and because of comments posted on her blog "Queen of Sky: Diary of a Flight Attendant" which the employer deemed inappropriate. This case highlighted the issue of personal blogging and freedom of expression versus employer rights and responsibilities, and so it received wide media attention. Simonetti took legal action against the airline for "wrongful termination, defamation of character and lost future wages". The suit was postponed while Delta was in bankruptcy proceedings. In early 2006, Erik Ringmar, a senior lecturer at the London School of Economics, was ordered by the convenor of his department to "take down and destroy" his blog in which he discussed the quality of education at the school. 
Mark Jen was terminated in 2005 after 10 days of employment as an assistant product manager at Google for discussing corporate secrets on his personal blog, then called 99zeros and hosted on the Google-owned Blogger service. He blogged about unreleased products and company finances a week before the company's earnings announcement. He was fired two days after he complied with his employer's request to remove the sensitive material from his blog. In India, blogger Gaurav Sabnis resigned from IBM after his posts questioned the claims made by a management school. Jessica Cutler, aka "The Washingtonienne", blogged about her sex life while employed as a congressional assistant. After the blog was discovered and she was fired, she wrote a novel based on her experiences and blog: The Washingtonienne: A Novel. Cutler was subsequently sued by one of her former lovers in a case that could establish the extent to which bloggers are obligated to protect the privacy of their real-life associates. Catherine Sanderson, a.k.a. Petite Anglaise, lost her job in Paris at a British accountancy firm because of blogging. Although the blog was written in a fairly anonymous manner, some of its descriptions of the firm and of some of its people were less than flattering. Sanderson later won a compensation claim case against the British firm, however. On the other hand, Penelope Trunk wrote an upbeat article in The Boston Globe in 2006, entitled "Blogs 'essential' to a good career". She was one of the first journalists to point out that a large portion of bloggers are professionals and that a well-written blog can help attract employers. Business owners Business owners who blog about their business can also run into legal consequences. Mark Cuban, owner of the Dallas Mavericks, was fined during the 2006 NBA playoffs for criticizing NBA officials on the court and in his blog. Political dangers Blogging can sometimes have unforeseen consequences in politically sensitive areas. In some countries, Internet police or secret police may monitor blogs and arrest blog authors or commentators. Blogs can be much harder to control than broadcast or print media because a person can create a blog whose authorship is hard to trace by using anonymity technology such as Tor. As a result, totalitarian and authoritarian regimes often seek to suppress blogs and punish those who maintain them. In Singapore, two ethnic Chinese individuals were imprisoned under the country's anti-sedition law for posting anti-Muslim remarks in their blogs. Egyptian blogger Kareem Amer was charged with insulting the Egyptian president Hosni Mubarak and an Islamic institution through his blog. It was the first time in the history of Egypt that a blogger was prosecuted. After a brief trial session that took place in Alexandria, the blogger was found guilty and sentenced to prison terms of three years for insulting Islam and inciting sedition and one year for insulting Mubarak. Egyptian blogger Abdel Monem Mahmoud was arrested in April 2007 for anti-government writings in his blog. Monem is a member of the then-banned Muslim Brotherhood. After the 2011 Egyptian revolution, the Egyptian blogger Maikel Nabil Sanad was charged with insulting the military for an article he wrote on his personal blog and sentenced to three years. After expressing opinions in his personal blog about the state of the Sudanese armed forces, Jan Pronk, United Nations Special Representative for Sudan, was given three days notice to leave Sudan. The Sudanese army had demanded his deportation.
In Myanmar, Nay Phone Latt, a blogger, was sentenced to 20 years in jail for posting a cartoon critical of the head of state Than Shwe. Personal safety One consequence of blogging is the possibility of online or in-person attacks or threats against the blogger, sometimes without apparent reason. In some cases, bloggers have faced cyberbullying. Kathy Sierra, author of the blog "Creating Passionate Users", was the target of threats and misogynistic insults to the point that she cancelled her keynote speech at a technology conference in San Diego, fearing for her safety. While a blogger's anonymity is often tenuous, Internet trolls who would attack a blogger with threats or insults can be emboldened by the anonymity of the online environment, where some users are known only by a pseudonymous "username" (e.g., "Hacker1984"). Sierra and supporters initiated an online discussion aimed at countering abusive online behaviour and developed a Blogger's Code of Conduct, which set out rules for behaviour in the online space.
Technology
Internet
null
33691
https://en.wikipedia.org/wiki/Wave%20equation
Wave equation
The wave equation is a second-order linear partial differential equation for the description of waves or standing wave fields such as mechanical waves (e.g. water waves, sound waves and seismic waves) or electromagnetic waves (including light waves). It arises in fields like acoustics, electromagnetism, and fluid dynamics. This article focuses on waves in classical physics. Quantum physics uses an operator-based wave equation, often in the form of a relativistic wave equation. Introduction The wave equation is a hyperbolic partial differential equation describing waves, including traveling and standing waves; the latter can be considered as linear superpositions of waves traveling in opposite directions. This article mostly focuses on the scalar wave equation describing waves in scalars by scalar functions of a time variable (a variable representing time) and one or more spatial variables (variables representing a position in a space under discussion). At the same time, there are vector wave equations describing waves in vectors such as waves for an electrical field, magnetic field, and magnetic vector potential and elastic waves. By comparison with vector wave equations, the scalar wave equation can be seen as a special case of the vector wave equations; in the Cartesian coordinate system, the scalar wave equation is the equation to be satisfied by each component (for each coordinate axis, such as the x component for the x axis) of a vector wave without sources of waves in the considered domain (i.e., space and time). For example, in the Cartesian coordinate system, for $\mathbf{E} = (E_x, E_y, E_z)$ as the representation of an electric vector field wave in the absence of wave sources, each coordinate axis component $E_i$ ($i = x, y, z$) must satisfy the scalar wave equation. Other scalar wave equation solutions are for physical quantities in scalars such as pressure in a liquid or gas, or the displacement along some specific direction of particles of a vibrating solid away from their resting (equilibrium) positions. The scalar wave equation is $\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} \right),$ where $c$ is a fixed non-negative real coefficient representing the propagation speed of the wave, $u$ is a scalar field representing the displacement or, more generally, the conserved quantity (e.g. pressure or density), $x$, $y$ and $z$ are the three spatial coordinates, and $t$ is the time coordinate. The equation states that, at any given point, the second derivative of $u$ with respect to time is proportional to the sum of the second derivatives of $u$ with respect to space, with the constant of proportionality being the square of the speed of the wave. Using notations from vector calculus, the wave equation can be written compactly as $u_{tt} = c^2 \Delta u$ or $\Box u = 0,$ where the double subscript denotes the second-order partial derivative with respect to time, $\Delta$ is the Laplace operator and $\Box$ the d'Alembert operator, defined as: $u_{tt} = \frac{\partial^2 u}{\partial t^2}, \quad \Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}, \quad \Box = \frac{1}{c^2} \frac{\partial^2}{\partial t^2} - \Delta.$ A solution to this (two-way) wave equation can be quite complicated. Still, it can be analyzed as a linear combination of simple solutions that are sinusoidal plane waves with various directions of propagation and wavelengths but all with the same propagation speed $c$. This analysis is possible because the wave equation is linear and homogeneous, so that any multiple of a solution is also a solution, and the sum of any two solutions is again a solution. This property is called the superposition principle in physics. 
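The superposition principle lends itself to a quick numerical sanity check. The following sketch is mine, not part of the article; the speed, wavenumbers, and finite-difference step are arbitrary illustrative choices. It confirms that a sum of a right-traveling and a left-traveling sinusoid satisfies the one-dimensional scalar wave equation up to discretization error:

```python
import numpy as np

c = 2.0                      # propagation speed (arbitrary choice)
k1, k2 = 3.0, 5.0            # wavenumbers of the two component waves
x = np.linspace(0.0, 2.0 * np.pi, 401)
t = 0.7                      # instant at which the residual is evaluated
h = 1e-4                     # step for centered finite differences

def u(x, t):
    # right-traveling plus left-traveling sinusoid; by the superposition
    # principle the sum is again a solution
    return np.sin(k1 * (x - c * t)) + 0.5 * np.cos(k2 * (x + c * t))

# centered second differences approximating u_tt and u_xx
u_tt = (u(x, t + h) - 2.0 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / h**2

residual = np.max(np.abs(u_tt - c**2 * u_xx))
print(f"max |u_tt - c^2 u_xx| = {residual:.2e}")  # ~1e-5: zero up to discretization error
```

Because the equation is linear and homogeneous, the residual stays at the level of truncation error no matter how the two component amplitudes are weighted.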
The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments. Wave equation in one space dimension The wave equation in one spatial dimension can be written as follows: $\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}.$ This equation is typically described as having only one spatial dimension $x$, because the only other independent variable is the time $t$. Derivation The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension. Another physical setting for derivation of the wave equation in one space dimension uses Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress). Hooke's law The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass $m$ interconnected with massless springs of length $h$. The springs have a spring constant of $k$. Here the dependent variable $u(x)$ measures the distance from the equilibrium of the mass situated at $x$, so that $u(x)$ essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The resulting force exerted on the mass at the location $x + h$ is: $F = k[u(x + 2h, t) - u(x + h, t)] - k[u(x + h, t) - u(x, t)].$ By equating the latter equation with the equation of motion for the weight, $F = m \, \frac{\partial^2}{\partial t^2} u(x + h, t)$, the following is obtained for the weight at the location $x + h$: $m \, \frac{\partial^2}{\partial t^2} u(x + h, t) = k[u(x + 2h, t) - 2u(x + h, t) + u(x, t)].$ If the array of weights consists of $N$ weights spaced evenly over the length $L = Nh$ of total mass $M = Nm$, and the total spring constant of the array $K = k/N$, we can write the above equation as $\frac{\partial^2}{\partial t^2} u(x + h, t) = \frac{KL^2}{M} \, \frac{u(x + 2h, t) - 2u(x + h, t) + u(x, t)}{h^2}.$ Taking the limit $N \to \infty$, $h \to 0$ and assuming smoothness, one gets $\frac{\partial^2 u(x, t)}{\partial t^2} = \frac{KL^2}{M} \, \frac{\partial^2 u(x, t)}{\partial x^2},$ which is from the definition of a second derivative. $KL^2/M$ is the square of the propagation speed in this particular case. Stress pulse in a bar In the case of a stress pulse propagating longitudinally through a bar, the bar acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A uniform bar, i.e. of constant cross-section, made from a linear elastic material has a stiffness $K$ given by $K = \frac{EA}{L},$ where $A$ is the cross-sectional area, and $E$ is the Young's modulus of the material. The wave equation becomes $\frac{\partial^2 u}{\partial t^2} = \frac{EAL}{M} \, \frac{\partial^2 u}{\partial x^2}.$ $AL$ is equal to the volume of the bar, and therefore $\frac{AL}{M} = \frac{1}{\rho},$ where $\rho$ is the density of the material. The wave equation reduces to $\frac{\partial^2 u}{\partial t^2} = \frac{E}{\rho} \, \frac{\partial^2 u}{\partial x^2}.$ The speed of a stress wave in a bar is therefore $\sqrt{E/\rho}$. General solution Algebraic approach For the one-dimensional wave equation a relatively simple general solution may be found. Defining new variables $\xi = x - ct, \quad \eta = x + ct$ changes the wave equation into $\frac{\partial^2 u}{\partial \xi \, \partial \eta} = 0,$ which leads to the general solution $u(x, t) = F(\xi) + G(\eta) = F(x - ct) + G(x + ct).$ In other words, the solution is the sum of a right-traveling function $F$ and a left-traveling function $G$. "Traveling" means that the shape of these individual arbitrary functions with respect to $x$ stays constant, however, the functions are translated left and right with time at the speed $c$. This was derived by Jean le Rond d'Alembert. 
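D'Alembert's traveling-wave picture can be illustrated directly. The following sketch is an illustration of mine under assumed grid and Courant-number parameters, not the article's material: it propagates an initial bump with the classic leapfrog finite-difference scheme and compares the result with the closed-form solution $F(x - ct) + F(x + ct)$:

```python
import numpy as np

c, L, N = 1.0, 10.0, 1001
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx / c                # Courant number 0.5 keeps the scheme stable
C2 = (c * dt / dx) ** 2

F = lambda s: np.exp(-20.0 * (s - L / 2) ** 2)   # shape of each traveling half

# initial data u(x, 0) = 2 F(x), u_t(x, 0) = 0, so u = F(x - ct) + F(x + ct)
u_prev = 2.0 * F(x)
# first step from a second-order Taylor expansion (initial velocity is zero)
u = u_prev + 0.5 * C2 * (np.roll(u_prev, -1) - 2.0 * u_prev + np.roll(u_prev, 1))

steps = 400
for _ in range(steps):
    # leapfrog update: u_new = 2u - u_old + C2 * (discrete u_xx)
    u, u_prev = 2.0 * u - u_prev + C2 * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)), u

t = (steps + 1) * dt
exact = F(x - c * t) + F(x + c * t)   # d'Alembert's two traveling bumps
# np.roll makes the grid periodic, which is harmless here: the bumps
# never reach the boundary within the simulated time
print("max error vs d'Alembert:", np.max(np.abs(u - exact)))
```

The initial bump splits into two half-amplitude copies traveling in opposite directions, exactly as the general solution predicts.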
Another way to arrive at this result is to factor the wave equation using two first-order differential operators: $\left[\frac{\partial}{\partial t} - c \frac{\partial}{\partial x}\right] \left[\frac{\partial}{\partial t} + c \frac{\partial}{\partial x}\right] u = 0.$ Then, for our original equation, we can define $v \equiv \frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x}$ and find that we must have $\frac{\partial v}{\partial t} - c \frac{\partial v}{\partial x} = 0.$ This advection equation can be solved by interpreting it as telling us that the directional derivative of $v$ in the $(1, -c)$ direction is 0. This means that the value of $v$ is constant on characteristic lines of the form $x + ct = x_0$, and thus that $v$ must depend only on $x + ct$, that is, have the form $H(x + ct)$. Then, to solve the first (inhomogeneous) equation relating $v$ to $u$, $\frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = H(x + ct),$ we can note that its homogeneous solution must be a function of the form $F(x - ct)$, by logic similar to the above. Guessing a particular solution of the form $G(x + ct)$, we find that $\frac{\partial}{\partial t} G(x + ct) + c \frac{\partial}{\partial x} G(x + ct) = H(x + ct).$ Expanding out the left side, rearranging terms, then using the change of variables $s = x + ct$ simplifies the equation to $G'(s) = \frac{H(s)}{2c}.$ This means we can find a particular solution $G$ of the desired form by integration. Thus, we have again shown that $u$ obeys $u(x, t) = F(x - ct) + G(x + ct)$. For an initial-value problem, the arbitrary functions $F$ and $G$ can be determined to satisfy initial conditions: $u(x, 0) = f(x), \quad u_t(x, 0) = g(x).$ The result is d'Alembert's formula: $u(x, t) = \frac{f(x - ct) + f(x + ct)}{2} + \frac{1}{2c} \int_{x - ct}^{x + ct} g(s) \, ds.$ In the classical sense, if $f(x) \in C^k$ and $g(x) \in C^{k-1}$, then $u(t, x) \in C^k$. However, the waveforms $F$ and $G$ may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation, and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components. Plane-wave eigenmodes Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency $\omega$, so that the temporal part of the wave function takes the form $e^{-i\omega t}$, and the amplitude is a function $f(x)$ of the spatial variable $x$, giving a separation of variables for the wave function: $u_\omega(x, t) = e^{-i\omega t} f(x).$ This produces an ordinary differential equation for the spatial part $f(x)$: $-\omega^2 f = c^2 \frac{d^2 f}{dx^2}.$ Therefore, $\frac{d^2 f}{dx^2} = -\left(\frac{\omega}{c}\right)^2 f,$ which is precisely an eigenvalue equation for $f(x)$, hence the name eigenmode. Known as the Helmholtz equation, it has the well-known plane-wave solutions $f(x) = A e^{\pm ikx},$ with wave number $k = \omega/c$. The total wave function for this eigenmode is then the linear combination $u_\omega(x, t) = e^{-i\omega t}\left(A e^{-ikx} + B e^{ikx}\right),$ where complex numbers $A$, $B$ depend in general on any initial and boundary conditions of the problem. Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor $e^{-i\omega t}$, so that a full solution can be decomposed into an eigenmode expansion: $u(x, t) = \int_{-\infty}^{\infty} s(\omega)\, u_\omega(x, t)\, d\omega,$ or in terms of the plane waves, $u(x, t) = \int_{-\infty}^{\infty} s_+(\omega)\, e^{-i\omega(t - x/c)}\, d\omega + \int_{-\infty}^{\infty} s_-(\omega)\, e^{-i\omega(t + x/c)}\, d\omega = F(x - ct) + G(x + ct),$ which is exactly in the same form as in the algebraic approach. Functions $s_\pm(\omega)$ are known as the Fourier components and are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations, such as FDTD method, of the wave packet $u(x, t)$, which is complete for representing waves in absence of time dilations. Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of $\omega$. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source. 
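To make the eigenmode expansion concrete, the following sketch (my illustration; the periodic domain and Gaussian initial profile are assumptions, not the article's) expands initial data in plane-wave modes with an FFT, advances each mode by its trivial phase evolution, and cross-checks the result against d'Alembert's formula:

```python
import numpy as np

c, L, N = 1.0, 2.0 * np.pi, 256
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # integer wavenumbers on [0, 2*pi)
w = c * np.abs(k)                              # dispersion relation w = c|k|

f = np.exp(-10.0 * (x - np.pi) ** 2)           # u(x, 0); take u_t(x, 0) = 0

# with zero initial velocity, each eigenmode splits evenly between the
# e^{-iwt} and e^{+iwt} branches, so u_hat(k, t) = u_hat(k, 0) * cos(w t)
t = 1.3
u = np.fft.ifft(np.fft.fft(f) * np.cos(w * t)).real

# cross-check against d'Alembert's formula (the Gaussian tails are far
# smaller than round-off at the domain edges, so periodicity is harmless)
exact = 0.5 * (np.exp(-10.0 * (x - c * t - np.pi) ** 2)
               + np.exp(-10.0 * (x + c * t - np.pi) ** 2))
print("max |eigenmode solution - d'Alembert| =", np.max(np.abs(u - exact)))
```

This is the frequency-domain method in miniature: the entire time evolution reduces to multiplying each Fourier component by a phase factor.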
Vectorial wave equation in three space dimensions The vectorial wave equation (from which the scalar wave equation can be directly derived) can be obtained by applying a force equilibrium to an infinitesimal volume element. If the medium has a modulus of elasticity $E$ that is homogeneous (i.e. independent of $\mathbf{x}$) within the volume element, then its stress tensor is given by $\mathbf{T} = E\,\nabla \mathbf{u}$, for a vectorial elastic deflection $\mathbf{u}(\mathbf{x}, t)$. The local equilibrium of: the tension force $\operatorname{div} \mathbf{T} = E\,\Delta \mathbf{u}$ due to deflection $\mathbf{u}$, and the inertial force $\rho\,\partial^2 \mathbf{u}/\partial t^2$ caused by the local acceleration can be written as $\rho\,\frac{\partial^2 \mathbf{u}}{\partial t^2} - E\,\Delta \mathbf{u} = 0.$ By merging density $\rho$ and elasticity module $E$, the sound velocity $c = \sqrt{E/\rho}$ results (material law). After insertion, the well-known governing wave equation for a homogeneous medium follows: $\frac{\partial^2 \mathbf{u}}{\partial t^2} - c^2\,\Delta \mathbf{u} = 0.$ (Note: Instead of vectorial $\mathbf{u}$, only scalar $u$ can be used, i.e. waves travelling only along the $x$ axis, and the scalar wave equation follows as $\frac{\partial^2 u}{\partial t^2} - c^2\,\frac{\partial^2 u}{\partial x^2} = 0$.) The above vectorial partial differential equation of the 2nd order delivers two mutually independent solutions. From the quadratic velocity term $c^2 = (+c)^2 = (-c)^2$ it can be seen that two waves travelling in opposite directions, with $+c$ and $-c$, are possible; hence results the designation "two-way wave equation". It can be shown for plane longitudinal wave propagation that the synthesis of two one-way wave equations leads to a general two-way wave equation. For propagation along the $x$ axis, the two-way wave equation factors with first-order operators as $\left(\frac{\partial}{\partial t} - c\,\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} + c\,\frac{\partial}{\partial x}\right)\mathbf{u} = 0.$ For waves travelling in only one of the two directions this simplifies to a single first-order factor. Therefore, the vectorial 1st-order one-way wave equation with waves travelling in a pre-defined propagation direction results as $\frac{\partial \mathbf{u}}{\partial t} - c\,\frac{\partial \mathbf{u}}{\partial x} = 0.$ Scalar wave equation in three space dimensions A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then be also used to obtain the same solution in two space dimensions. Spherical waves To obtain a solution with constant frequencies, apply the Fourier transform $\Psi(\mathbf{r}, t) = \int_{-\infty}^{\infty} \Psi(\mathbf{r}, \omega)\, e^{-i\omega t}\, d\omega,$ which transforms the wave equation into an elliptic partial differential equation of the form: $(\nabla^2 + k^2)\, \Psi(\mathbf{r}, \omega) = 0,$ with $k = \omega/c$. This is the Helmholtz equation and can be solved using separation of variables. In spherical coordinates this leads to a separation of the radial and angular variables, writing the solution as: $\Psi(\mathbf{r}, \omega) = \sum_{l,m} f_{lm}(r)\, Y_{lm}(\theta, \phi).$ The angular part of the solution takes the form of spherical harmonics and the radial function satisfies: $\left[\frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} + k^2 - \frac{l(l+1)}{r^2}\right] f_l(r) = 0,$ independent of $m$, with $k^2 = \omega^2/c^2$. Substituting $f_l(r) = \frac{u_l(r)}{\sqrt{r}}$ transforms the equation into $\left[\frac{d^2}{dr^2} + \frac{1}{r}\frac{d}{dr} + k^2 - \frac{\left(l + \frac{1}{2}\right)^2}{r^2}\right] u_l(r) = 0,$ which is the Bessel equation. Example Consider the case $l = 0$. Then there is no angular dependence and the amplitude depends only on the radial distance, i.e., $\Psi(\mathbf{r}, t) \to u(r, t)$. In this case, the wave equation reduces to $\nabla^2 u - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0,$ or $\frac{\partial^2 u}{\partial r^2} + \frac{2}{r}\frac{\partial u}{\partial r} - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = 0.$ This equation can be rewritten as $\frac{\partial^2 (ru)}{\partial t^2} - c^2\,\frac{\partial^2 (ru)}{\partial r^2} = 0,$ where the quantity $ru$ satisfies the one-dimensional wave equation. Therefore, there are solutions in the form $u(r, t) = \frac{1}{r} F(r - ct) + \frac{1}{r} G(r + ct),$ where $F$ and $G$ are general solutions to the one-dimensional wave equation and can be interpreted as respectively an outgoing and incoming spherical wave. The outgoing wave can be generated by a point source, and such waves make possible sharp signals whose form is altered only by a decrease in amplitude as $r$ increases. Such waves exist only in cases of space with odd dimensions. 
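The claim that $ru$ obeys the one-dimensional wave equation, and hence that $F(r - ct)/r$ is an exact solution, can be checked symbolically. The following is a sketch of mine (assuming the SymPy library is available), not part of the article:

```python
# Symbolic check that an outgoing spherical wave u = f(r - c t)/r satisfies
# the radially symmetric 3D wave equation
#   u_tt = c^2 * (1/r^2) d/dr ( r^2 du/dr ),
# exhibiting the 1/r amplitude decay discussed above.
import sympy as sp

r, t, c = sp.symbols("r t c", positive=True)
f = sp.Function("f")            # arbitrary smooth waveform

u = f(r - c * t) / r            # outgoing spherical wave
radial_laplacian = sp.diff(r**2 * sp.diff(u, r), r) / r**2

residual = sp.simplify(sp.diff(u, t, 2) - c**2 * radial_laplacian)
print(residual)                 # prints 0: the equation is satisfied exactly
```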
For physical examples of solutions to the 3D wave equation that possess angular dependence, see dipole radiation. Monochromatic spherical wave Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmode of the wave equation in three dimensions. Following the derivation in the previous section on plane-wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency $\omega$, then the transformed function $r\Psi(r, \omega)$ has simply plane-wave solutions: $r\Psi(r, \omega) = A e^{\pm ikr},$ or $\Psi(r, \omega) = \frac{A}{r}\, e^{\pm ikr}.$ From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude $|\Psi|^2 = \frac{|A|^2}{r^2},$ drops at the rate proportional to $1/r^2$, an example of the inverse-square law. Solution of a general initial-value problem The wave equation is linear in $u$ and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let $\varphi(\xi, \eta, \zeta)$ be an arbitrary function of three independent variables, and let the spherical wave form $F$ be a delta function. Let a family of spherical waves have center at $(\xi, \eta, \zeta)$, and let $r$ be the radial distance from that point. Thus $r^2 = (x - \xi)^2 + (y - \eta)^2 + (z - \zeta)^2.$ If $u$ is a superposition of such waves with weighting function $\varphi$, then $u(t, x, y, z) = \frac{1}{4\pi c} \iiint \varphi(\xi, \eta, \zeta)\, \frac{\delta(r - ct)}{r}\, d\xi\, d\eta\, d\zeta;$ the denominator $4\pi c$ is a convenience. From the definition of the delta function, $u$ may also be written as $u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha,\, y + ct\beta,\, z + ct\gamma)\, d\omega,$ where $\alpha$, $\beta$, and $\gamma$ are coordinates on the unit sphere $S$, and $d\omega$ is the area element on $S$. This result has the interpretation that $u(t, x)$ is $t$ times the mean value of $\varphi$ on a sphere of radius $ct$ centered at $x$: $u(t, x, y, z) = t\, M_{ct}[\varphi].$ It follows that $u(0, x, y, z) = 0, \quad u_t(0, x, y, z) = \varphi(x, y, z).$ The mean value is an even function of $t$, and hence if $v(t, x, y, z) = \frac{\partial}{\partial t}\big(t\, M_{ct}[\psi]\big),$ then $v(0, x, y, z) = \psi(x, y, z), \quad v_t(0, x, y, z) = 0.$ These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point, given $(t, x, y, z)$, depends only on the data on the sphere of radius $ct$ that is intersected by the light cone drawn backwards from that point. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is only true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. Scalar wave equation in two space dimensions In two space dimensions, the wave equation is $u_{tt} = c^2 (u_{xx} + u_{yy}).$ We can use the three-dimensional theory to solve this problem if we regard $u$ as a function in three dimensions that is independent of the third dimension. If $u(0, x, y) = 0, \quad u_t(0, x, y) = \varphi(x, y),$ then the three-dimensional solution formula becomes $u(t, x, y) = t\, M_{ct}[\varphi] = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha,\, y + ct\beta)\, d\omega,$ where $\alpha$ and $\beta$ are the first two coordinates on the unit sphere, and $d\omega$ is the area element on the sphere. This integral may be rewritten as a double integral over the disc $D$ with center $(x, y)$ and radius $ct$: $u(t, x, y) = \frac{1}{2\pi c} \iint_D \frac{\varphi(x + \xi,\, y + \eta)}{\sqrt{c^2 t^2 - \xi^2 - \eta^2}}\, d\xi\, d\eta.$ It is apparent that the solution at $(t, x, y)$ depends not only on the data on the light cone, where $\xi^2 + \eta^2 = c^2 t^2$, but also on data that are interior to that cone. Scalar wave equation in general dimension and Kirchhoff's formulae We want to find solutions to $u_{tt} - \Delta u = 0$ for $u : \mathbb{R}^n \times (0, \infty) \to \mathbb{R}$ with $u(x, 0) = g(x)$ and $u_t(x, 0) = h(x)$. Odd dimensions Assume $n \geq 3$ is an odd integer, and $g \in C^{m+1}(\mathbb{R}^n)$, $h \in C^m(\mathbb{R}^n)$, for $m = \frac{n+1}{2}$. Let $\gamma_n$ denote a dimensional normalization constant, and let $u$ be defined by the corresponding iterated spherical means of $g$ and $h$. Then $u \in C^2\big(\mathbb{R}^n \times [0, \infty)\big)$, $u_{tt} - \Delta u = 0$ in $\mathbb{R}^n \times (0, \infty)$, and $u$ attains the initial data $g$, $h$. Even dimensions Assume $n \geq 2$ is an even integer and $g \in C^{m+1}(\mathbb{R}^n)$, $h \in C^m(\mathbb{R}^n)$, for $m = \frac{n+2}{2}$. Let $\gamma_n$ again denote a normalization constant and let $u$ be defined by the corresponding means over balls; then $u$ satisfies the wave equation in $\mathbb{R}^n \times (0, \infty)$ with the prescribed initial data. Green's function Consider the inhomogeneous wave equation in $D$ dimensions, $u_{tt} - c^2\, \Delta u = s(x, t).$ By rescaling time, we can set the wave speed $c = 1$. Since the wave equation has order 2 in time, there are two impulse responses: an acceleration impulse and a velocity impulse. The effect of inflicting an acceleration impulse is to suddenly change the wave velocity $u_t$. 
The effect of inflicting a velocity impulse is to suddenly change the wave displacement $u$. For an acceleration impulse, $u_{tt} - \Delta u = \delta(t)\, \delta^D(x),$ where $\delta$ is the Dirac delta function. The solution to this case is called the Green's function $G$ for the wave equation. For a velocity impulse, $u_{tt} - \Delta u = \delta'(t)\, \delta^D(x)$, so if we solve the Green function $G$, the solution for this case is just $\partial_t G$. Duhamel's principle The main use of Green's functions is to solve initial value problems by Duhamel's principle, both for the homogeneous and the inhomogeneous case. Given the Green function $G$, and initial conditions $u(x, 0)$, $u_t(x, 0)$, the solution to the homogeneous wave equation is $u(\cdot, t) = (\partial_t G)(\cdot, t) * u(\cdot, 0) + G(\cdot, t) * u_t(\cdot, 0),$ where the asterisk is convolution in space. More explicitly, $u(x, t) = \int (\partial_t G)(x - x', t)\, u(x', 0)\, dx' + \int G(x - x', t)\, u_t(x', 0)\, dx'.$ For the inhomogeneous case, the solution has one additional term by convolution over spacetime: $\int_0^t \!\!\int G(x - x', t - t')\, s(x', t')\, dx'\, dt'.$ Solution by Fourier transform By a Fourier transform, $G(x, t) = \int \frac{d^D k\, d\omega}{(2\pi)^{D+1}}\, \frac{e^{i(k \cdot x - \omega t)}}{\|k\|^2 - \omega^2}.$ The $\omega$ term can be integrated by the residue theorem. It would require us to perturb the integral slightly either by $+i\epsilon$ or by $-i\epsilon$, because it is an improper integral. One perturbation gives the forward solution, and the other the backward solution. The forward solution gives $G(x, t) = \theta(t) \int \frac{d^D k}{(2\pi)^D}\, e^{i k \cdot x}\, \frac{\sin(\|k\| t)}{\|k\|}.$ The integral can be solved by analytically continuing the Poisson kernel, giving a closed form whose normalization constant is half the surface area of a hypersphere. Solutions in particular dimensions We can relate the Green's function in $D$ dimensions to the Green's function in nearby dimensions. Lowering dimensions Given a function and a solution of a differential equation in $D$ dimensions, we can trivially extend it to $D + 1$ dimensions by setting the additional dimension to be constant: $u(x_1, \ldots, x_D, x_{D+1}, t) := u(x_1, \ldots, x_D, t).$ Since the Green's function is constructed from the equation and its solutions, the Green's function in $D + 1$ dimensions integrates to the Green's function in $D$ dimensions: $G_D(x_1, \ldots, x_D, t) = \int_{-\infty}^{\infty} G_{D+1}(x_1, \ldots, x_D, y, t)\, dy.$ Raising dimensions The Green's function in $D + 2$ dimensions can be related to the Green's function in $D$ dimensions. By spherical symmetry, $G_D(x, t) = G_D(r, t), \quad r = \|x\|.$ Integrating the descent relation twice in polar coordinates, $G_D(r, t) = 2\pi \int_0^\infty G_{D+2}\!\left(\sqrt{r^2 + \rho^2},\, t\right) \rho\, d\rho = 2\pi \int_r^\infty G_{D+2}(z, t)\, z\, dz,$ where in the last equality we made the change of variables $z = \sqrt{r^2 + \rho^2}$. Thus, we obtain the recurrence relation $G_{D+2}(r, t) = -\frac{1}{2\pi r}\, \frac{\partial G_D(r, t)}{\partial r}.$ Solutions in D = 1, 2, 3 When $D = 1$, the integrand in the Fourier transform is the sinc function $\frac{\sin(kt)}{k}$, giving $G_1(x, t) = \frac{1}{2}\, \theta(t - |x|) = \frac{1}{4}\big(\operatorname{sgn}(x + t) - \operatorname{sgn}(x - t)\big) \quad (t > 0),$ where $\operatorname{sgn}$ is the sign function and $\theta$ is the unit step function. One solution is the forward solution, the other is the backward solution. The dimension can be raised to give the $D = 3$ case, $G_3(x, t) = \frac{\delta(t - r)}{4\pi r}, \quad r = \|x\|,$ and similarly for the backward solution. This can be integrated down by one dimension to give the $D = 2$ case, $G_2(x, t) = \frac{\theta(t - r)}{2\pi \sqrt{t^2 - r^2}}.$ Wavefronts and wakes In the $D = 1$ case, the Green's function solution is the sum of two wavefronts moving in opposite directions. In higher odd dimensions, the forward solution is nonzero only at $r = t$, on the light cone. As the dimensions increase, the shape of the wavefront becomes increasingly complex, involving higher derivatives of the Dirac delta function. For example, applying the recurrence relation to $G_3$ gives the five-dimensional forward solution $G_5(x, t) = \frac{\delta'(t - r)}{8\pi^2 r^2} + \frac{\delta(t - r)}{8\pi^2 r^3},$ where $\delta'$ is the derivative of the delta function, here with the wave speed set to 1; for general $c$ the argument $t - r$ becomes $ct - r$ and the wave speed is restored. In even dimensions, the forward solution is nonzero in $r < t$: the entire region behind the wavefront becomes nonzero, called a wake. The wake has, in the two-dimensional case above, the equation $G_2 = \frac{\theta(t - r)}{2\pi \sqrt{t^2 - r^2}}.$ The wavefront itself also involves increasingly higher derivatives of the Dirac delta function. This means that a general Huygens' principle – the wave displacement at a point in spacetime depends only on the state at points on characteristic rays passing through that point – only holds in odd dimensions. A physical interpretation is that signals transmitted by waves remain undistorted in odd dimensions, but distorted in even dimensions. Hadamard's conjecture states that this generalized Huygens' principle still holds in all odd dimensions even when the coefficients in the wave equation are no longer constant. It is not strictly correct, but it is correct for certain families of coefficients. 
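As an illustration of Duhamel's principle with the $D = 1$ forward Green's function above, the following sketch (my own, with $c = 1$, an arbitrary Gaussian initial velocity, and assuming NumPy and SciPy are available) evaluates the spatial convolution $G * u_t(\cdot, 0)$ numerically and compares it with the matching term of d'Alembert's formula:

```python
import numpy as np
from scipy.special import erf

g = lambda x: np.exp(-x**2)        # initial velocity u_t(x, 0) (illustrative)
t = 0.8
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

G = 0.5 * (np.abs(x) <= t)         # forward Green's function at time t (c = 1)
u = np.convolve(g(x), G, mode="same") * dx   # spatial convolution (G * g)(x)

# the same term in d'Alembert's formula: (1/2) * integral_{x-t}^{x+t} g(s) ds
u_exact = 0.25 * np.sqrt(np.pi) * (erf(x + t) - erf(x - t))
print("max difference:", np.max(np.abs(u - u_exact)))  # ~1e-3 (sharp kernel edges)
```

The agreement is limited only by the discretization of the boxcar kernel's jumps; refining the grid shrinks the discrepancy accordingly.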
Problems with boundaries One space dimension Reflection and transmission at the boundary of two media For an incident wave traveling from one medium (where the wave speed is $c_1$) to another medium (where the wave speed is $c_2$), one part of the wave will transmit into the second medium, while another part reflects back into the other direction and stays in the first medium. The amplitude of the transmitted wave and the reflected wave can be calculated by using the continuity condition at the boundary. Consider the component of the incident wave with an angular frequency of $\omega$, which has the waveform $A e^{i\omega(t - x/c_1)}.$ At $t = 0$, the incident wave reaches the boundary between the two media at $x = 0$. Therefore, the corresponding reflected wave and the transmitted wave will have the waveforms $B e^{i\omega(t + x/c_1)} \quad \text{and} \quad C e^{i\omega(t - x/c_2)}.$ The continuity condition at the boundary is $u(0^-, t) = u(0^+, t), \quad u_x(0^-, t) = u_x(0^+, t).$ This gives the equations $A + B = C, \quad A - B = \frac{c_1}{c_2}\, C,$ and we have the reflectivity and transmissivity $\frac{B}{A} = \frac{c_2 - c_1}{c_2 + c_1}, \quad \frac{C}{A} = \frac{2 c_2}{c_2 + c_1}.$ When $c_2 < c_1$, the reflected wave has a reflection phase change of 180°, since $B/A < 0$. The energy conservation can be verified by $\frac{B^2}{c_1} + \frac{C^2}{c_2} = \frac{A^2}{c_1}.$ The above discussion holds true for any component, regardless of its angular frequency $\omega$. The limiting case of $c_2 = 0$ corresponds to a "fixed end" that does not move, whereas the limiting case of $c_2 \to \infty$ corresponds to a "free end". These coefficients are exercised numerically in the sketch following this section. The Sturm–Liouville formulation A flexible string that is stretched between two points $x = 0$ and $x = L$ satisfies the wave equation for $t > 0$ and $0 < x < L$. On the boundary points, $u$ may satisfy a variety of boundary conditions. A general form that is appropriate for applications is $-u_x(0, t) + a\, u(0, t) = 0, \quad u_x(L, t) + b\, u(L, t) = 0,$ where $a$ and $b$ are non-negative. The case where $u$ is required to vanish at an endpoint (i.e. "fixed end") is the limit of this condition when the respective $a$ or $b$ approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form $u(x, t) = T(t)\, v(x).$ A consequence is that $\frac{T''}{c^2 T} = \frac{v''}{v} = -\lambda.$ The eigenvalue $\lambda$ must be determined so that there is a non-trivial solution of the boundary-value problem $v'' + \lambda v = 0, \quad -v'(0) + a\, v(0) = 0, \quad v'(L) + b\, v(L) = 0.$ This is a special case of the general problem of Sturm–Liouville theory. If $a$ and $b$ are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for $u$ and $u_t$ can be obtained from expansion of these functions in the appropriate trigonometric series. Several space dimensions The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain $D$ in $m$-dimensional space, with boundary $B$. Then the wave equation is to be satisfied if $x$ is in $D$, and $t > 0$. On the boundary of $D$, the solution $u$ shall satisfy $\frac{\partial u}{\partial n} + a\, u = 0,$ where $n$ is the unit outward normal to $B$, and $a$ is a non-negative function defined on $B$. The case where $u$ vanishes on $B$ is a limiting case for $a$ approaching infinity. The initial conditions are $u(0, x) = f(x), \quad u_t(0, x) = g(x),$ where $f$ and $g$ are defined in $D$. This problem may be solved by expanding $f$ and $g$ in the eigenfunctions of the Laplacian in $D$, which satisfy the boundary conditions. Thus the eigenfunction $v$ satisfies $\nabla \cdot \nabla v + \lambda v = 0$ in $D$, and $\frac{\partial v}{\partial n} + a\, v = 0$ on $B$. In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary $B$. If $B$ is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle $\theta$, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation. If the boundary is a sphere in three space dimensions, the angular components of the eigenfunctions are spherical harmonics, and the radial components are Bessel functions of half-integer order. 
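The reflectivity and transmissivity formulas above are easy to exercise numerically. This short sketch (with wave speeds chosen arbitrarily for illustration) also confirms the energy balance and the two limiting cases:

```python
c1, c2 = 1.0, 3.0                  # wave speeds of the two media (illustrative)

r   = (c2 - c1) / (c2 + c1)        # reflectivity  B/A
tau = 2.0 * c2 / (c2 + c1)         # transmissivity C/A

print(f"r = {r:.3f}, tau = {tau:.3f}")
# energy conservation: r^2/c1 + tau^2/c2 must equal 1/c1
print("energy balance:", r**2 / c1 + tau**2 / c2, "==", 1.0 / c1)

# limiting cases: c2 -> 0 gives r -> -1 (fixed end, 180 degree phase change);
# c2 -> infinity gives r -> +1 (free end)
```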
Inhomogeneous wave equation in one dimension The inhomogeneous wave equation in one dimension is $u_{tt}(x, t) - c^2 u_{xx}(x, t) = s(x, t),$ with initial conditions $u(x, 0) = f(x), \quad u_t(x, 0) = g(x).$ The function $s(x, t)$ is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point $(x_i, t_i)$, the value of $u(x_i, t_i)$ depends only on the values of $f(x_i + c t_i)$ and $f(x_i - c t_i)$ and the values of the function $g(x)$ between $x_i - c t_i$ and $x_i + c t_i$. This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is $c$, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time. In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point $(x_i, t_i)$ as $R_C$. Suppose we integrate the inhomogeneous wave equation over this region: $\iint_{R_C} \big(c^2 u_{xx}(x, t) - u_{tt}(x, t)\big)\, dx\, dt = \iint_{R_C} s(x, t)\, dx\, dt.$ To simplify this greatly, we can use Green's theorem to simplify the left side to get the following: $\int_{L_0 + L_1 + L_2} \big({-c^2}\, u_x(x, t)\, dt - u_t(x, t)\, dx\big) = \iint_{R_C} s(x, t)\, dx\, dt.$ The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute: $\int_{x_i - c t_i}^{x_i + c t_i} -u_t(x, 0)\, dx = -\int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx.$ In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus $dt = 0$. For the other two sides of the region, it is worth noting that $x \pm ct$ is a constant, namely $x_i \pm c t_i$, where the sign is chosen appropriately. Using this, we can get the relation $dx \pm c\, dt = 0$, again choosing the right sign: $\int_{L_1} \big({-c^2}\, u_x\, dt - u_t\, dx\big) = c \int_{L_1} du = c\, u(x_i, t_i) - c\, f(x_i + c t_i).$ And similarly for the final boundary segment: $\int_{L_2} \big({-c^2}\, u_x\, dt - u_t\, dx\big) = -c \int_{L_2} du = c\, u(x_i, t_i) - c\, f(x_i - c t_i).$ Adding the three results together and putting them back in the original integral gives $2c\, u(x_i, t_i) - c\, f(x_i + c t_i) - c\, f(x_i - c t_i) - \int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx = \iint_{R_C} s(x, t)\, dx\, dt.$ Solving for $u(x_i, t_i)$, we arrive at $u(x_i, t_i) = \frac{f(x_i + c t_i) + f(x_i - c t_i)}{2} + \frac{1}{2c} \int_{x_i - c t_i}^{x_i + c t_i} g(x)\, dx + \frac{1}{2c} \int_0^{t_i}\! \int_{x_i - c(t_i - t)}^{x_i + c(t_i - t)} s(x, t)\, dx\, dt.$ In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices $(x_i, t_i)$ compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source. This source integral is exercised numerically in the sketch at the end of this article. Further generalizations Elastic waves The elastic wave equation (also known as the Navier–Cauchy equation) in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion: $\rho\, \ddot{\mathbf{u}} = \mathbf{f} + (\lambda + 2\mu)\, \nabla(\nabla \cdot \mathbf{u}) - \mu\, \nabla \times (\nabla \times \mathbf{u}),$ where: $\lambda$ and $\mu$ are the so-called Lamé parameters describing the elastic properties of the medium, $\rho$ is the density, $\mathbf{f}$ is the source function (driving force), $\mathbf{u}$ is the displacement vector. By using $\nabla \times (\nabla \times \mathbf{u}) = \nabla(\nabla \cdot \mathbf{u}) - \nabla \cdot \nabla \mathbf{u}$, the elastic wave equation can be rewritten into the more common form of the Navier–Cauchy equation. Note that in the elastic wave equation, both force and displacement are vector quantities. 
Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if $\mathbf{f}$ and $\nabla \cdot \mathbf{u}$ are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field $\mathbf{E}$, which has only transverse waves. Dispersion relation In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation $\omega = \omega(\mathbf{k}),$ where $\omega$ is the angular frequency, and $\mathbf{k}$ is the wavevector describing plane-wave solutions. For light waves, the dispersion relation is $\omega = \pm c\, |\mathbf{k}|$, but in general, the constant speed $c$ gets replaced by a variable phase velocity: $v_{\mathrm{p}} = \frac{\omega(k)}{k}.$
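Returning to the inhomogeneous one-dimensional problem above, the explicit source integral can be verified on a manufactured example. The following sketch is mine, not the article's: with $c = 1$, the function $u(x, t) = t^2 \sin x$ solves $u_{tt} - u_{xx} = s$ with $s(x, t) = (2 + t^2)\sin x$ and zero initial data, so $u$ should equal the double integral of $s$ over the backward characteristic triangle, scaled by $1/(2c)$:

```python
import numpy as np

c = 1.0
s = lambda xi, tau: (2.0 + tau**2) * np.sin(xi)   # manufactured source

def trapezoid(y, x):
    # simple trapezoidal rule, to keep the sketch self-contained
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def u_source_term(x, t, n=400):
    # (1/2c) * int_0^t int_{x-c(t-tau)}^{x+c(t-tau)} s(xi, tau) dxi dtau
    taus = np.linspace(0.0, t, n)
    inner = np.empty(n)
    for i, tau in enumerate(taus):
        xi = np.linspace(x - c * (t - tau), x + c * (t - tau), n)
        inner[i] = trapezoid(s(xi, tau), xi)
    return trapezoid(inner, taus) / (2.0 * c)

x, t = 0.6, 1.5
print(u_source_term(x, t), "vs exact", t**2 * np.sin(x))  # both ~1.2705
```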
Physical sciences
Waves
null
33702
https://en.wikipedia.org/wiki/Wolf
Wolf
The wolf (Canis lupus; plural: wolves), also known as the grey wolf or gray wolf, is a canine native to Eurasia and North America. More than thirty subspecies of Canis lupus have been recognized, including the dog and dingo, though grey wolves, as popularly understood, only comprise naturally-occurring wild subspecies. The wolf is the largest wild extant member of the family Canidae, and is further distinguished from other Canis species by its less pointed ears and muzzle, as well as a shorter torso and a longer tail. The wolf is nonetheless related closely enough to smaller Canis species, such as the coyote and the golden jackal, to produce fertile hybrids with them. The wolf's fur is usually mottled white, brown, grey, and black, although subspecies in the arctic region may be nearly all white. Of all members of the genus Canis, the wolf is most specialized for cooperative game hunting as demonstrated by its physical adaptations to tackling large prey, its more social nature, and its highly advanced expressive behaviour, including individual or group howling. It travels in nuclear families consisting of a mated pair accompanied by their offspring. Offspring may leave to form their own packs on the onset of sexual maturity and in response to competition for food within the pack. Wolves are also territorial, and fights over territory are among the principal causes of mortality. The wolf is mainly a carnivore and feeds on large wild hooved mammals as well as smaller animals, livestock, carrion, and garbage. Single wolves or mated pairs typically have higher success rates in hunting than do large packs. Pathogens and parasites, notably the rabies virus, may infect wolves. The global wild wolf population was estimated to be 300,000 in 2003 and is considered to be of Least Concern by the International Union for Conservation of Nature (IUCN). Wolves have a long history of interactions with humans, having been despised and hunted in most pastoral communities because of their attacks on livestock, while conversely being respected in some agrarian and hunter-gatherer societies. Although the fear of wolves exists in many human societies, the majority of recorded attacks on people have been attributed to animals suffering from rabies. Wolf attacks on humans are rare because wolves are relatively few, live away from people, and have developed a fear of humans because of their experiences with hunters, farmers, ranchers, and shepherds. Etymology The English "wolf" stems from the Old English wulf, which is itself thought to be derived from the Proto-Germanic *wulfaz. The Proto-Indo-European root * may also be the source of the Latin word for the animal lupus (*). The name "grey wolf" refers to the greyish colour of the species. Since pre-Christian times, Germanic peoples such as the Anglo-Saxons took on wulf as a prefix or suffix in their names. Examples include Wulfhere ("Wolf Army"), Cynewulf ("Royal Wolf"), Cēnwulf ("Bold Wolf"), Wulfheard ("Wolf-hard"), Earnwulf ("Eagle Wolf"), Wulfstān ("Wolf Stone"), Æðelwulf ("Noble Wolf"), Wolfhroc ("Wolf-Frock"), Wolfhetan ("Wolf Hide"), Scrutolf ("Garb Wolf"), Wolfgang ("Wolf Gait") and Wolfdregil ("Wolf Runner"). Taxonomy In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his Systema Naturae the binomial nomenclature. Canis is the Latin word meaning "dog", and under this genus he listed the doglike carnivores including domestic dogs, wolves, and jackals. He classified the domestic dog as Canis familiaris, and the wolf as Canis lupus. 
Linnaeus considered the dog to be a separate species from the wolf because of its "cauda recurvata" (upturning tail), which is not found in any other canid. Subspecies In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under C. lupus 36 wild subspecies, and proposed two additional subspecies: familiaris (Linnaeus, 1758) and dingo (Meyer, 1793). Wozencraft included hallstromi—the New Guinea singing dog—as a taxonomic synonym for the dingo. Wozencraft referred to a 1999 mitochondrial DNA (mtDNA) study as one of the guides in forming his decision, and listed the 38 subspecies of C. lupus under the biological common name of "wolf", the nominate subspecies being the Eurasian wolf (C. l. lupus) based on the type specimen that Linnaeus studied in Sweden. Studies using paleogenomic techniques reveal that the modern wolf and the dog are sister taxa, as modern wolves are not closely related to the population of wolves that was first domesticated. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral Canis familiaris, and therefore should not be assessed for the IUCN Red List. Evolution The phylogenetic descent of the extant wolf C. lupus from the earlier C. mosbachensis (which in turn descended from C. etruscus) is widely accepted. Among the oldest fossils of the modern grey wolf is one from Ponte Galeria in Italy, dating to 406,500 ± 2,400 years ago. Remains from Cripple Creek Sump in Alaska may be considerably older, around 1 million years old, though differentiating between the remains of modern wolves and C. mosbachensis is difficult and ambiguous, with some authors choosing to include C. mosbachensis (which first appeared around 1.4 million years ago) as an early subspecies of C. lupus. Considerable morphological diversity existed among wolves by the Late Pleistocene. Many Late Pleistocene wolf populations had more robust skulls and teeth than modern wolves, often with a shortened snout, a pronounced development of the temporalis muscle, and robust premolars. It is proposed that these features were specialized adaptations for the processing of carcass and bone associated with the hunting and scavenging of Pleistocene megafauna. Compared with modern wolves, some Pleistocene wolves showed an increase in tooth breakage similar to that seen in the extinct dire wolf. This suggests they either often processed carcasses, or that they competed with other carnivores and needed to consume their prey quickly. The frequency and location of tooth fractures in these wolves indicates they were habitual bone crackers like the modern spotted hyena. Genomic studies suggest modern wolves and dogs descend from a common ancestral wolf population. A 2021 study found that the Himalayan wolf and the Indian plains wolf are part of a lineage that is basal to other wolves and split from them 200,000 years ago. Other wolves appear to share most of their common ancestry much more recently, within the last 23,000 years (around the peak and the end of the Last Glacial Maximum), originating from Siberia or Beringia. While some sources have suggested that this was a consequence of a population bottleneck, other studies have suggested that this is a result of gene flow homogenising ancestry. 
A 2016 genomic study suggests that Old World and New World wolves split around 12,500 years ago followed by the divergence of the lineage that led to dogs from other Old World wolves around 11,100–12,300 years ago. An extinct Late Pleistocene wolf may have been the ancestor of the dog, with the dog's similarity to the extant wolf being the result of genetic admixture between the two. The dingo, Basenji, Tibetan Mastiff and Chinese indigenous breeds are basal members of the domestic dog clade. The divergence time for wolves in Europe, the Middle East, and Asia is estimated to be fairly recent at around 1,600 years ago. Among New World wolves, the Mexican wolf diverged around 5,400 years ago. Admixture with other canids In the distant past, there was gene flow between African wolves, golden jackals, and grey wolves. The African wolf is a descendant of a genetically admixed canid of 72% wolf and 28% Ethiopian wolf ancestry. One African wolf from the Egyptian Sinai Peninsula showed admixture with Middle Eastern grey wolves and dogs. There is evidence of gene flow between golden jackals and Middle Eastern wolves, less so with European and Asian wolves, and least with North American wolves. This indicates the golden jackal ancestry found in North American wolves may have occurred before the divergence of the Eurasian and North American wolves. The common ancestor of the coyote and the wolf is admixed with a ghost population of an extinct unidentified canid. This canid was genetically close to the dhole and evolved after the divergence of the African hunting dog from the other canid species. The basal position of the coyote compared to the wolf has been proposed to be due to the coyote retaining more of the mitochondrial genome of this unidentified canid. Similarly, a museum specimen of a wolf from southern China collected in 1963 showed a genome that was 12–14% admixed from this unknown canid. In North America, some coyotes and wolves show varying degrees of past genetic admixture. In more recent times, some male Italian wolves originated from dog ancestry, which indicates female wolves will breed with male dogs in the wild. In the Caucasus Mountains, ten percent of dogs, including livestock guardian dogs, are first-generation hybrids. Although mating between golden jackals and wolves has never been observed, evidence of jackal-wolf hybridization was discovered through mitochondrial DNA analysis of jackals living in the Caucasus Mountains and in Bulgaria. In 2021, a genetic study found that the dog's similarity to the extant grey wolf was the result of substantial dog-into-wolf gene flow, with little evidence of the reverse. Description The wolf is the largest extant member of the family Canidae, and is further distinguished from coyotes and jackals by a broader snout, shorter ears, a shorter torso and a longer tail. It is slender and powerfully built, with a large, deeply descending rib cage, a sloping back, and a heavily muscled neck. The wolf's legs are moderately longer than those of other canids, which enables the animal to move swiftly, and to overcome the deep snow that covers most of its geographical range in winter, though more short-legged ecomorphs are found in some wolf populations. The ears are relatively small and triangular. The wolf's head is large and heavy, with a wide forehead, strong jaws and a long, blunt muzzle. The skull is in length and in width. 
The teeth are heavy and large, making them better suited to crushing bone than those of other canids, though they are not as specialized as those found in hyenas. Its molars have a flat chewing surface, but not to the same extent as the coyote, whose diet contains more vegetable matter. Females tend to have narrower muzzles and foreheads, thinner necks, slightly shorter legs, and less massive shoulders than males. Adult wolves measure in length and at shoulder height. The tail measures in length, the ears in height, and the hind feet are . The size and weight of the modern wolf increases proportionally with latitude in accordance with Bergmann's rule. The mean body mass of the wolf is , the smallest specimen recorded at and the largest at . On average, European wolves weigh , North American wolves , and Indian and Arabian wolves . Females in any given wolf population typically weigh less than males. Wolves weighing over are uncommon, though exceptionally large individuals have been recorded in Alaska and Canada. In central Russia, exceptionally large males can reach a weight of . Pelage The wolf has very dense and fluffy winter fur, with a short undercoat and long, coarse guard hairs. Most of the undercoat and some guard hairs are shed in spring and grow back in autumn. The longest hairs occur on the back, particularly on the front quarters and neck. Especially long hairs grow on the shoulders and almost form a crest on the upper part of the neck. The hairs on the cheeks are elongated and form tufts. The ears are covered in short hairs and project from the fur. Short, elastic and closely adjacent hairs are present on the limbs from the elbows down to the calcaneal tendons. The winter fur is highly resistant to the cold. Wolves in northern climates can rest comfortably in open areas at by placing their muzzles between the rear legs and covering their faces with their tail. Wolf fur provides better insulation than dog fur and does not collect ice when warm breath is condensed against it. In cold climates, the wolf can reduce the flow of blood near its skin to conserve body heat. The warmth of the foot pads is regulated independently from the rest of the body and is maintained at just above tissue-freezing point where the pads come in contact with ice and snow. In warm climates, the fur is coarser and scarcer than in northern wolves. Female wolves tend to have smoother furred limbs than males and generally develop the smoothest overall coats as they age. Older wolves generally have more white hairs on the tip of the tail, along the nose, and on the forehead. Winter fur is retained longest by lactating females, although with some hair loss around their teats. Hair length on the middle of the back is , and the guard hairs on the shoulders generally do not exceed , but can reach . A wolf's coat colour is determined by its guard hairs. Wolves usually have some hairs that are white, brown, grey and black. The coat of the Eurasian wolf is a mixture of ochreous (yellow to orange) and rusty ochreous (orange/red/brown) colours with light grey. The muzzle is pale ochreous grey, and the area of the lips, cheeks, chin, and throat is white. The top of the head, forehead, under and between the eyes, and between the eyes and ears is grey with a reddish film. The neck is ochreous. Long, black tips on the hairs along the back form a broad stripe, with black hair tips on the shoulders, upper chest and rear of the body. 
The sides of the body, tail, and outer limbs are a pale dirty ochreous colour, while the inner sides of the limbs, belly, and groin are white. Apart from those wolves which are pure white or black, these tones vary little across geographical areas, although the patterns of these colours vary between individuals. In North America, the coat colours of wolves follow Gloger's rule, wolves in the Canadian arctic being white and those in southern Canada, the U.S., and Mexico being predominantly grey. In some areas of the Rocky Mountains of Alberta and British Columbia, the coat colour is predominantly black, some being blue-grey and some with silver and black. Differences in coat colour between sexes are absent in Eurasia; females tend to have redder tones in North America. Black-coloured wolves in North America acquired their colour from wolf-dog admixture after the first arrival of dogs across the Bering Strait 12,000 to 14,000 years ago. Research into the inheritance of white colour from dogs into wolves has yet to be undertaken. Ecology Distribution and habitat Wolves occur across Eurasia and North America. However, deliberate human persecution because of livestock predation and fear of attacks on humans has reduced the wolf's range to about one-third of its historic range; the wolf is now extirpated (locally extinct) from much of its range in Western Europe, the United States and Mexico, and completely in the British Isles and Japan. In modern times, the wolf occurs mostly in wilderness and remote areas. The wolf can be found between sea level and . Wolves live in forests, inland wetlands, shrublands, grasslands (including Arctic tundra), pastures, deserts, and rocky peaks on mountains. Habitat use by wolves depends on the abundance of prey, snow conditions, livestock densities, road densities, human presence and topography. Diet Like all land mammals that are pack hunters, the wolf feeds predominantly on ungulates that can be divided into large size and medium size , and have a body mass similar to that of the combined mass of the pack members. The wolf specializes in preying on the vulnerable individuals of large prey, with a pack of 15 able to bring down an adult moose. The variation in diet between wolves living on different continents is based on the variety of hoofed mammals and of available smaller and domesticated prey. In North America, the wolf's diet is dominated by wild large hoofed mammals (ungulates) and medium-sized mammals. In Asia and Europe, their diet is dominated by wild medium-sized hoofed mammals and domestic species. The wolf depends on wild species, and if these are not readily available, as in Asia, the wolf is more reliant on domestic species. Across Eurasia, wolves prey mostly on moose, red deer, roe deer and wild boar. In North America, important range-wide prey are elk, moose, caribou, white-tailed deer and mule deer. Prior to their extirpation from North America, wild horses were among the most frequently consumed prey of North American wolves. Wolves can digest their meal in a few hours and can feed several times in one day, making quick use of large quantities of meat. A well-fed wolf stores fat under the skin, around the heart, intestines, kidneys, and bone marrow, particularly during the autumn and winter. Nonetheless, wolves are not fussy eaters. Smaller-sized animals that may supplement their diet include rodents, hares, insectivores and smaller carnivores. They frequently eat waterfowl and their eggs. 
When such foods are insufficient, they prey on lizards, snakes, frogs, and large insects when available. Wolves in some areas may consume fish and even marine life. Wolves also consume some plant material. In Europe, they eat apples, pears, figs, melons, berries and cherries. In North America, wolves eat blueberries and raspberries. They also eat grass, which may provide some vitamins, but is most likely used mainly to induce vomiting to rid themselves of intestinal parasites or long guard hairs. They are known to eat the berries of mountain-ash, lily of the valley, bilberries, cowberries, European black nightshade, grain crops, and the shoots of reeds. In times of scarcity, wolves will readily eat carrion. In Eurasian areas with dense human activity, many wolf populations are forced to subsist largely on livestock and garbage. As prey in North America continue to occupy suitable habitats with low human density, North American wolves eat livestock and garbage only in dire circumstances. Cannibalism is not uncommon in wolves during harsh winters, when packs often attack weak or injured wolves and may eat the bodies of dead pack members. Interactions with other predators Wolves typically dominate other canid species in areas where they both occur. In North America, incidents of wolves killing coyotes are common, particularly in winter, when coyotes feed on wolf kills. Wolves may attack coyote den sites, digging out and killing their pups, though rarely eating them. There are no records of coyotes killing wolves, though coyotes may chase wolves if they outnumber them. According to a press release by the U.S. Department of Agriculture in 1921, the infamous Custer Wolf relied on coyotes to accompany him and warn him of danger. Though they fed from his kills, he never allowed them to approach him. Interactions have been observed in Eurasia between wolves and golden jackals, the latter's numbers being comparatively small in areas with high wolf densities. Wolves also kill red, Arctic and corsac foxes, usually in disputes over carcasses, sometimes eating them. Brown bears typically dominate wolf packs in disputes over carcasses, while wolf packs mostly prevail against bears when defending their den sites. Both species kill each other's young. Wolves eat the brown bears they kill, while brown bears seem to eat only young wolves. Wolf interactions with American black bears are much rarer because of differences in habitat preferences. Wolves have been recorded on numerous occasions actively seeking out American black bears in their dens and killing them without eating them. Unlike brown bears, American black bears frequently lose against wolves in disputes over kills. Wolves also dominate and sometimes kill wolverines, and will chase off those that attempt to scavenge from their kills. Wolverines escape from wolves in caves or up trees. Wolves may interact and compete with felids, such as the Eurasian lynx, which may feed on smaller prey where wolves are present and may be suppressed by large wolf populations. Wolves encounter cougars along portions of the Rocky Mountains and adjacent mountain ranges. Wolves and cougars typically avoid encountering each other by hunting at different elevations for different prey (niche partitioning). This is more difficult during winter. Wolves in packs usually dominate cougars and can steal their kills or even kill them, while one-to-one encounters tend to be dominated by the cat, who likewise will kill wolves. 
Wolves more broadly affect cougar population dynamics and distribution by dominating territory and prey opportunities and disrupting the feline's behaviour. Wolf and Siberian tiger interactions are well-documented in the Russian Far East, where tigers significantly depress wolf numbers, sometimes to the point of localized extinction. In Israel, Palestine, Central Asia and India, wolves may encounter striped hyenas, usually in disputes over carcasses. Striped hyenas feed extensively on wolf-killed carcasses in areas where the two species interact. One-to-one, hyenas dominate wolves, and may prey on them, but wolf packs can drive off single or outnumbered hyenas. There is at least one case in Israel of a hyena associating and cooperating with a wolf pack. Infections Viral diseases carried by wolves include: rabies, canine distemper, canine parvovirus, infectious canine hepatitis, papillomatosis, and canine coronavirus. In wolves, the incubation period for rabies is eight to 21 days, and results in the host becoming agitated, deserting its pack, and travelling up to a day, thus increasing the risk of infecting other wolves. Although canine distemper is lethal in dogs, it has not been recorded to kill wolves, except in Canada and Alaska. The canine parvovirus, which causes death by dehydration, electrolyte imbalance, and endotoxic shock or sepsis, is largely survivable in wolves, but can be lethal to pups. Bacterial diseases carried by wolves include: brucellosis, Lyme disease, leptospirosis, tularemia, bovine tuberculosis, listeriosis and anthrax. Although Lyme disease can debilitate individual wolves, it does not appear to significantly affect wolf populations. Leptospirosis can be contracted through contact with infected prey or urine, and can cause fever, anorexia, vomiting, anemia, hematuria, icterus, and death. Wolves are often infested with a variety of arthropod exoparasites, including fleas, ticks, lice, and mites. The most harmful to wolves, particularly pups, is the mange mite (Sarcoptes scabiei), though they rarely develop full-blown mange, unlike foxes. Endoparasites known to infect wolves include: protozoans and helminths (flukes, tapeworms, roundworms and thorny-headed worms). Most fluke species reside in the wolf's intestines. Tapeworms are commonly found in wolves, which they get through their prey, and generally cause little harm in wolves, though this depends on the number and size of the parasites, and the sensitivity of the host. Symptoms often include constipation, toxic and allergic reactions, irritation of the intestinal mucosa, and malnutrition. Wolves can carry over 30 roundworm species, though most roundworm infections appear benign, depending on the number of worms and the age of the host. Behaviour Social structure The wolf is a social animal. Its populations consist of packs and lone wolves, most lone wolves being temporarily alone while they disperse from packs to form their own or join another one. The wolf's basic social unit is the nuclear family consisting of a mated pair accompanied by their offspring. The average pack size in North America is eight wolves and 5.5 in Europe. The average pack across Eurasia consists of a family of eight wolves (two adults, juveniles, and yearlings), or sometimes two or three such families, with examples of exceptionally large packs consisting of up to 42 wolves being known. Cortisol levels in wolves rise significantly when a pack member dies, indicating the presence of stress. 
During times of prey abundance caused by calving or migration, different wolf packs may join together temporarily. Offspring typically stay in the pack for 10–54 months before dispersing. Triggers for dispersal include the onset of sexual maturity and competition within the pack for food. The distance travelled by dispersing wolves varies widely; some stay in the vicinity of the parental group, while other individuals may travel great distances of upwards of , , and from their natal (birth) packs. A new pack is usually founded by an unrelated dispersing male and female, travelling together in search of an area devoid of other hostile packs. Wolf packs rarely adopt other wolves into their fold and typically kill them. In the rare cases where other wolves are adopted, the adoptee is almost invariably an immature animal of one to three years old, and unlikely to compete for breeding rights with the mated pair. This usually occurs between the months of February and May. Adopted males may mate with an available pack female and then form their own pack. In some cases, a lone wolf is adopted into a pack to replace a deceased breeder. Wolves are territorial and generally establish territories far larger than they require to survive, assuring a steady supply of prey. Territory size depends largely on the amount of prey available and the age of the pack's pups. They tend to increase in size in areas with low prey populations, or when the pups reach the age of six months when they have the same nutritional needs as adults. Wolf packs travel constantly in search of prey, covering roughly 9% of their territory per day, on average . The core of their territory is on average where they spend 50% of their time. Prey density tends to be much higher on the territory's periphery. Wolves tend to avoid hunting on the fringes of their range to avoid fatal confrontations with neighbouring packs. The smallest territory on record was held by a pack of six wolves in northeastern Minnesota, which occupied an estimated , while the largest was held by an Alaskan pack of ten wolves encompassing . Wolf packs are typically settled, and usually leave their accustomed ranges only during severe food shortages. Territorial fights are among the principal causes of wolf mortality, one study concluding that 14–65% of wolf deaths in Minnesota and the Denali National Park and Preserve were due to other wolves. Communication Wolves communicate using vocalizations, body postures, scent, touch, and taste. The phases of the moon have no effect on wolf vocalization, and despite popular belief, wolves do not howl at the Moon. Wolves howl to assemble the pack usually before and after hunts, to pass on an alarm particularly at a den site, to locate each other during a storm, while crossing unfamiliar territory, and to communicate across great distances. Wolf howls can under certain conditions be heard over areas of up to . Other vocalizations include growls, barks and whines. Wolves do not bark as loudly or continuously as dogs do in confrontations, rather barking a few times and then retreating from a perceived danger. Aggressive or self-assertive wolves are characterized by their slow and deliberate movements, high body posture and raised hackles, while submissive ones carry their bodies low, flatten their fur, and lower their ears and tail. Scent marking involves urine, feces, and preputial and anal gland scents. This is more effective at advertising territory than howling and is often used in combination with scratch marks. 
Wolves increase their rate of scent marking when they encounter the marks of wolves from other packs. Lone wolves will rarely mark, but newly bonded pairs will scent mark the most. These marks are generally left at intervals throughout the territory on regular travelways and junctions. Such markers can last for two to three weeks, and are typically placed near rocks, boulders, trees, or the skeletons of large animals. Raised leg urination is considered to be one of the most important forms of scent communication in the wolf, making up 60–80% of all scent marks observed. Reproduction Wolves are monogamous, mated pairs usually remaining together for life. Should one of the pair die, another mate is found quickly. With wolves in the wild, inbreeding does not occur where outbreeding is possible. Wolves become mature at the age of two years and sexually mature from the age of three years. The age of first breeding in wolves depends largely on environmental factors: when food is plentiful, or when wolf populations are heavily managed, wolves can rear pups at younger ages to better exploit abundant resources. Females are capable of producing pups every year, one litter annually being the average. Oestrus and rut begin in the second half of winter and last for two weeks. Dens are usually constructed for pups during the summer period. When building dens, females make use of natural shelters like fissures in rocks, cliffs overhanging riverbanks and holes thickly covered by vegetation. Sometimes, the den is the appropriated burrow of smaller animals such as foxes, badgers or marmots. An appropriated den is often widened and partly remade. On rare occasions, female wolves dig burrows themselves, which are usually small and short with one to three openings. The den is usually constructed not more than away from a water source. It typically faces southwards, where it can be better warmed by sunlight exposure and the snow can thaw more quickly. Resting places, play areas for the pups, and food remains are commonly found around wolf dens. The odor of urine and rotting food emanating from the denning area often attracts scavenging birds like magpies and ravens. Though they mostly avoid areas within human sight, wolves have been known to nest near homes, paved roads and railways. During pregnancy, female wolves remain in a den located away from the peripheral zone of their territories, where violent encounters with other packs are less likely to occur. The gestation period lasts 62–75 days, with pups usually being born in the spring months, or in early summer in very cold places such as the tundra. Young females give birth to four to five young, and older females to six to eight young, occasionally up to 14. The mortality rate of pups is 60–80%. Newborn wolf pups look similar to German Shepherd Dog pups. They are born blind and deaf and are covered in short soft greyish-brown fur. They weigh at birth and begin to see after nine to 12 days. The milk canines erupt after one month. Pups first leave the den after three weeks. At one-and-a-half months of age, they are agile enough to flee from danger. Mother wolves do not leave the den for the first few weeks, relying on the fathers to provide food for them and their young. Pups begin to eat solid food at the age of three to four weeks. They have a fast growth rate during their first four months of life: during this period, a pup's weight can increase nearly 30 times. 
Wolf pups begin play-fighting at the age of three weeks, though unlike young coyotes and foxes, their bites are gentle and controlled. Actual fights to establish hierarchy usually occur at five to eight weeks of age. This is in contrast to young coyotes and foxes, which may begin fighting even before the onset of play behaviour. By autumn, the pups are mature enough to accompany the adults on hunts for large prey. Hunting and feeding Single wolves or mated pairs typically have higher success rates in hunting than do large packs; single wolves have occasionally been observed to kill large prey such as moose, bison and muskoxen unaided. The size of a wolf hunting pack is related to the number of pups that survived the previous winter, adult survival, and the rate of dispersing wolves leaving the pack. The optimal pack size for hunting elk is four wolves, and for bison a large pack size is more successful. Wolves move around their territory when hunting, using the same trails for extended periods. Wolves are nocturnal predators. During the winter, a pack will commence hunting in the twilight of early evening and will hunt all night, travelling tens of kilometres. Sometimes hunting large prey occurs during the day. During the summer, wolves generally tend to hunt individually, ambushing their prey and rarely giving pursuit. When hunting large gregarious prey, wolves will try to isolate an individual from its group. If successful, a wolf pack can bring down game that will feed it for days, but one error in judgement can lead to serious injury or death. Most large prey have developed defensive adaptations and behaviours. Wolves have been killed while attempting to bring down bison, elk, moose, muskoxen, and even by one of their smallest hoofed prey, the white-tailed deer. With smaller prey like beaver, geese, and hares, there is no risk to the wolf. Although people often believe wolves can easily overcome any of their prey, their success rate in hunting hoofed prey is usually low. The wolf must give chase and gain on its fleeing prey, slow it down by biting through thick hair and hide, and then disable it enough to begin feeding. Wolves may wound large prey and then lie around resting for hours before killing it when it is weaker due to blood loss, thereby lessening the risk of injury to themselves. With medium-sized prey, such as roe deer or sheep, wolves kill by biting the throat, severing nerve tracts and the carotid artery, thus causing the animal to die within a few seconds to a minute. With small, mouselike prey, wolves leap in a high arc and immobilize it with their forepaws. Once prey is brought down, wolves begin to feed excitedly, ripping and tugging at the carcass in all directions, and bolting down large chunks of it. The breeding pair typically monopolizes food to continue producing pups. When food is scarce, this is done at the expense of other family members, especially non-pups. Wolves typically commence feeding by gorging on the larger internal organs, like the heart, liver, lungs, and stomach lining. The kidneys and spleen are eaten once they are exposed, followed by the muscles. A wolf can eat 15–19% of its body weight in one sitting. Status and conservation The global wild wolf population in 2003 was estimated at 300,000. Wolf population declines have been arrested since the 1970s. This has fostered recolonization and reintroduction in parts of the species' former range as a result of legal protection, changes in land use, and rural human population shifts to cities. 
Competition with humans for livestock and game species, concerns over the danger posed by wolves to people, and habitat fragmentation pose a continued threat to the wolf. Despite these threats, the IUCN classifies the wolf as Least Concern on its Red List due to its relatively widespread range and stable population. The species is listed under Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), meaning international trade in the species (including parts and derivatives) is regulated. However, the populations of Bhutan, India, Nepal and Pakistan are listed in Appendix I, which prohibits commercial international trade in wild-sourced specimens. North America In Canada, 50,000–60,000 wolves live in 80% of their historical range, making Canada an important stronghold for the species. Under Canadian law, First Nations people can hunt wolves without restrictions, but others must acquire licenses for the hunting and trapping seasons. As many as 4,000 wolves may be harvested in Canada each year. The wolf is a protected species in national parks under the Canada National Parks Act. In Alaska, 7,000–11,000 wolves are found on 85% of the state's area. Wolves may be hunted or trapped with a license; around 1,200 wolves are harvested annually. In the contiguous United States, wolf declines were caused by the expansion of agriculture, the decimation of the wolf's main prey species like the American bison, and extermination campaigns. Wolves were given protection under the Endangered Species Act (ESA) of 1973, and have since returned to parts of their former range thanks to both natural recolonizations and reintroductions in Yellowstone and Idaho. The repopulation of wolves in the Midwestern United States has been concentrated in the Great Lakes states of Minnesota, Wisconsin and Michigan, where wolves number over 4,000 as of 2018. Wolves also occupy much of the northern Rocky Mountains region and the northwest, with a total population over 3,000 as of the 2020s. In Mexico and parts of the southwestern United States, the Mexican and U.S. governments collaborated from 1977 to 1980 in capturing all Mexican wolves remaining in the wild to prevent their extinction and established captive breeding programs for reintroduction. As of 2024, the reintroduced Mexican wolf population numbers over 250 individuals. Eurasia Europe, excluding Russia, Belarus and Ukraine, has 17,000 wolves in more than 28 countries. In many countries of the European Union, the wolf is strictly protected under the 1979 Berne Convention on the Conservation of European Wildlife and Natural Habitats (Appendix II) and the 1992 Council Directive 92/43/EEC on the Conservation of Natural Habitats and of Wild Fauna and Flora (Annexes II and IV). There is extensive legal protection in many European countries, although there are national exceptions. Wolves have been persecuted in Europe for centuries, having been exterminated in Great Britain by 1684, in Ireland by 1770, in Central Europe by 1899, in France by the 1930s, and in much of Scandinavia by the early 1970s. They continued to survive in parts of Finland, Eastern Europe and Southern Europe. Since 1980, European wolves have rebounded and expanded into parts of their former range. The decline of the traditional pastoral and rural economies seems to have ended the need to exterminate the wolf in parts of Europe. 
As of 2016, estimates of wolf numbers include: 4,000 in the Balkans, 3,460–3,849 in the Carpathian Mountains, 1,700–2,240 in the Baltic states, 1,100–2,400 in the Italian Peninsula, and around 2,500 in the northwest Iberian Peninsula as of 2007. In a study of wolf conservation in Sweden, it was found that there was little opposition between the policies of the European Union and those of the Swedish officials implementing domestic policy. In the former Soviet Union, wolf populations have retained much of their historical range despite Soviet-era large-scale extermination campaigns. Their numbers range from 1,500 in Georgia to 20,000 in Kazakhstan and up to 45,000 in Russia. In Russia, the wolf is regarded as a pest because of its attacks on livestock, and wolf management means controlling their numbers by destroying them throughout the year. Russian history over the past century shows that reduced hunting leads to an abundance of wolves. The Russian government has continued to pay bounties for wolves, and annual harvests of 20–30% do not appear to significantly affect their numbers. In the Middle East, only Israel and Oman give wolves explicit legal protection. Israel has protected its wolves since 1954 and has maintained a moderately sized population of 150 through effective enforcement of conservation policies. These wolves have moved into neighbouring countries. Approximately 300–600 wolves inhabit the Arabian Peninsula. The wolf also appears to be widespread in Iran. Turkey has an estimated population of about 7,000 wolves. Outside of Turkey, wolf populations in the Middle East may total 1,000–2,000. In southern Asia, the northern regions of Afghanistan and Pakistan are important strongholds for wolves. The wolf has been protected in India since 1972. The Indian wolf is distributed across the states of Gujarat, Rajasthan, Haryana, Uttar Pradesh, Madhya Pradesh, Maharashtra, Karnataka and Andhra Pradesh. As of 2019, it is estimated that there are around 2,000–3,000 Indian wolves in the country. In East Asia, Mongolia's population numbers 10,000–20,000. In China, Heilongjiang has roughly 650 wolves, Xinjiang has 10,000 and Tibet has 2,000. Evidence from 2017 suggests that wolves range across all of mainland China. Wolves have been historically persecuted in China but have been legally protected since 1998. The last Japanese wolf was captured and killed in 1905. Relationships with humans In culture In folklore, religion and mythology The wolf is a common motif in the mythologies and cosmologies of peoples throughout its historical range. The Ancient Greeks associated wolves with Apollo, the god of light and order. The Ancient Romans connected the wolf with their god of war and agriculture Mars, and believed their city's founders, Romulus and Remus, were suckled by a she-wolf. Norse mythology includes the feared giant wolf Fenrir, and Geri and Freki, Odin's faithful pets. In Chinese astronomy, the wolf represents Sirius and guards the heavenly gate. In China, the wolf was traditionally associated with greed and cruelty, and wolf epithets were used to describe negative behaviours such as cruelty ("wolf's heart"), mistrust ("wolf's look") and lechery ("wolf-sex"). In both Hinduism and Buddhism, the wolf is ridden by gods of protection. In Vedic Hinduism, the wolf is a symbol of the night and the daytime quail must escape from its jaws. In Tantric Buddhism, wolves are depicted as inhabitants of graveyards and destroyers of corpses. 
In the Pawnee creation myth, the wolf was the first animal brought to Earth. When humans killed it, they were punished with death, destruction and the loss of immortality. For the Pawnee, Sirius is the "wolf star" and its disappearance and reappearance signified the wolf moving to and from the spirit world. Both Pawnee and Blackfoot call the Milky Way the "wolf trail". The wolf is also an important crest symbol for clans of the Pacific Northwest like the Kwakwakaʼwakw. The concept of people turning into wolves, and the inverse, has been present in many cultures. One Greek myth tells of Lycaon being transformed into a wolf by Zeus as punishment for his evil deeds. The legend of the werewolf has been widespread in European folklore and involves people willingly turning into wolves to attack and kill others. The Navajo have traditionally believed that witches would turn into wolves by donning wolf skins and would kill people and raid graveyards. The Dena'ina believed wolves were once men and viewed them as brothers. In fable and literature Aesop featured wolves in several of his fables, playing on the concerns of Ancient Greece's settled, sheep-herding world. His most famous fable is "The Boy Who Cried Wolf", which is directed at those who knowingly raise false alarms, and from which the idiomatic phrase "to cry wolf" is derived. Some of his other fables concentrate on maintaining the trust between shepherds and guard dogs in their vigilance against wolves, as well as anxieties over the close relationship between wolves and dogs. Although Aesop used wolves to warn, criticize and moralize about human behaviour, his portrayals added to the wolf's image as a deceitful and dangerous animal. The Bible uses an image of a wolf lying with a lamb in a utopian vision of the future. In the New Testament, Jesus is said to have used wolves as illustrations of the dangers his followers, whom he represents as sheep, would face should they follow him. Isengrim the wolf, a character first appearing in the 12th-century Latin poem Ysengrimus, is a major character in the Reynard Cycle, where he stands for the low nobility, whilst his adversary, Reynard the fox, represents the peasant hero. Isengrim is forever the victim of Reynard's wit and cruelty, often dying at the end of each story. The tale of "Little Red Riding Hood", first written in 1697 by Charles Perrault, is considered to have further contributed to the wolf's negative reputation in the Western world. The Big Bad Wolf is portrayed as a villain capable of imitating human speech and disguising itself with human clothing. The character has been interpreted as an allegorical sexual predator. Villainous wolf characters also appear in "The Three Little Pigs" and "The Wolf and the Seven Young Goats". The hunting of wolves, and their attacks on humans and livestock, feature prominently in Russian literature, and are included in the works of Leo Tolstoy, Anton Chekhov, Nikolay Nekrasov, Ivan Bunin, Leonid Pavlovich Sabaneyev, and others. Tolstoy's War and Peace and Chekhov's Peasants both feature scenes in which wolves are hunted with hounds and Borzois. The musical composition Peter and the Wolf involves a wolf that is captured for eating a duck but is spared and sent to a zoo. Wolves are among the central characters of Rudyard Kipling's The Jungle Book. 
His portrayal of wolves has been praised posthumously by wolf biologists: rather than being villainous or gluttonous, as was common in wolf portrayals at the time of the book's publication, they are shown as living in amiable family groups and drawing on the experience of infirm but experienced elder pack members. Farley Mowat's largely fictional 1963 memoir Never Cry Wolf is widely considered to be the most popular book on wolves, having been adapted into a Hollywood film and taught in several schools decades after its publication. Although credited with having changed popular perceptions of wolves by portraying them as loving, cooperative and noble, it has been criticized for its idealization of wolves and its factual inaccuracies. Conflicts Human presence appears to stress wolves, as seen in increased cortisol levels in instances such as snowmobiling near their territory. Predation on livestock Livestock depredation has been one of the primary reasons for hunting wolves and can pose a severe problem for wolf conservation. As well as causing economic losses, the threat of wolf predation causes great stress on livestock producers, and no foolproof solution for preventing such attacks short of exterminating wolves has been found. Some nations help offset economic losses to wolves through compensation programs or state insurance. Domesticated animals are easy prey for wolves, as they have been bred under constant human protection and are thus unable to defend themselves very well. Wolves typically resort to attacking livestock when wild prey is depleted. In Eurasia, a large part of the diet of some wolf populations consists of livestock, while such incidents are rare in North America, where healthy populations of wild prey have been largely restored. The majority of losses occur during the summer grazing period, untended livestock in remote pastures being the most vulnerable to wolf predation. The most frequently targeted livestock species are sheep (Europe), domestic reindeer (northern Scandinavia), goats (India), horses (Mongolia), cattle and turkeys (North America). The number of animals killed in single attacks varies according to species: most attacks on cattle and horses result in one death, while turkeys, sheep and domestic reindeer may be killed in surplus. Wolves mainly attack livestock when the animals are grazing, though they occasionally break into fenced enclosures. Competition with dogs A review of the studies on the competitive effects of dogs on sympatric carnivores did not mention any research on competition between dogs and wolves. Competition would favour the wolf, which is known to kill dogs; however, wolves usually live in pairs or in small packs in areas with high human persecution, giving them a disadvantage when facing large groups of dogs. Wolves kill dogs on occasion, and some wolf populations rely on dogs as an important food source. In Croatia, wolves kill more dogs than sheep, and wolves in Russia appear to limit stray dog populations. Wolves may display unusually bold behaviour when attacking dogs accompanied by people, sometimes ignoring nearby humans. Wolf attacks on dogs may occur both in house yards and in forests. Wolf attacks on hunting dogs are considered a major problem in Scandinavia and Wisconsin. Although the number of dogs killed each year by wolves is relatively low, it induces a fear of wolves entering villages and farmyards to prey on dogs. 
In many cultures, dogs are seen as family members, or at least working team members, and losing one can lead to strong emotional responses such as demands for more liberal hunting regulations. Dogs that are employed to guard sheep help to mitigate human–wolf conflicts, and are often proposed as one of the non-lethal tools in the conservation of wolves. Shepherd dogs are not particularly aggressive, but they can disrupt potential wolf predation by displaying behaviours that are ambiguous to the wolf, such as barking, social greeting, invitation to play or aggression. The historical use of shepherd dogs across Eurasia has been effective against wolf predation, especially when confining sheep in the presence of several livestock guardian dogs. Shepherd dogs are sometimes killed by wolves. Attacks on humans The fear of wolves has been pervasive in many societies, though humans are not part of the wolf's natural prey. How wolves react to humans depends largely on their prior experience with people: wolves lacking any negative experience of humans, or which are food-conditioned, may show little fear of people. Although wolves may react aggressively when provoked, such attacks are mostly limited to quick bites on extremities, and the attacks are not pressed. Predatory attacks may be preceded by a long period of habituation, in which wolves gradually lose their fear of humans. The victims are repeatedly bitten on the head and face, and are then dragged off and consumed unless the wolves are driven off. Such attacks typically occur only locally and do not stop until the wolves involved are eliminated. Predatory attacks can occur at any time of the year, with a peak in the June–August period, when the chances of people entering forested areas (for livestock grazing or berry and mushroom picking) increase. Cases of non-rabid wolf attacks in winter have been recorded in Belarus, Kirov and Irkutsk oblasts, Karelia and Ukraine. Also, wolves with pups experience greater food stresses during this period. The majority of victims of predatory wolf attacks are children under the age of 18 and, in the rare cases where adults are killed, the victims are almost always women. Indian wolves have a history of preying on children, a phenomenon called "child-lifting". Children are taken primarily in the spring and summer periods during the evening hours, and often within human settlements. Cases of rabies in wolves are low when compared with other species, as wolves do not serve as primary reservoirs of the disease, but can be infected by animals such as dogs, jackals and foxes. Incidents of rabies in wolves are very rare in North America, though numerous in the eastern Mediterranean, the Middle East and Central Asia. Wolves apparently develop the "furious" phase of rabies to a very high degree. This, coupled with their size and strength, makes rabid wolves perhaps the most dangerous of rabid animals. Bites from rabid wolves are 15 times more dangerous than those of rabid dogs. Rabid wolves usually act alone, travelling large distances and often biting large numbers of people and domestic animals. Most rabid wolf attacks occur in the spring and autumn periods. Unlike with predatory attacks, the victims of rabid wolves are not eaten, and the attacks generally occur only on a single day. The victims are chosen at random, though most cases involve adult men. During the fifty years up to 2002, there were eight fatal attacks in Europe and Russia, and more than two hundred in southern Asia. 
Human hunting of wolves Theodore Roosevelt said wolves are difficult to hunt because of their elusiveness, sharp senses, high endurance, and ability to quickly incapacitate and kill hunting dogs. Historic methods included the killing of spring-born litters in their dens, coursing with dogs (usually combinations of sighthounds, Bloodhounds and Fox Terriers), poisoning with strychnine, and trapping. A popular method of wolf hunting in Russia involves trapping a pack within a small area by encircling it with fladry poles carrying a human scent. This method relies heavily on the wolf's fear of human scents, though it can lose its effectiveness when wolves become accustomed to the odor. Some hunters can lure wolves by imitating their calls. In Kazakhstan and Mongolia, wolves are traditionally hunted using eagles and large falcons, though this practice is declining, as experienced falconers are becoming few in number. Shooting wolves from aircraft is highly effective, due to increased visibility and direct lines of fire. Several types of dog, including the Borzoi and Kyrgyz Taigan, have been specifically bred for wolf hunting. Wolves as pets and working animals As the ancestors of domestic dogs, wolves have lived alongside humans for tens of thousands of years. In modern times, wolves and wolf-dog hybrids are sometimes kept as exotic pets, but wolves do not show the same tractability as dogs in living alongside humans, being generally less responsive to human commands and more likely to act aggressively. Humans are more likely to be fatally mauled by a pet wolf or wolf-dog hybrid than by a dog.
Biology and health sciences
Carnivora
null
33709
https://en.wikipedia.org/wiki/Walrus
Walrus
The walrus (Odobenus rosmarus) is a large pinniped marine mammal with discontinuous distribution about the North Pole in the Arctic Ocean and subarctic seas of the Northern Hemisphere. It is the only extant species in the family Odobenidae and genus Odobenus. This species is subdivided into two subspecies: the Atlantic walrus (O. r. rosmarus), which lives in the Atlantic Ocean, and the Pacific walrus (O. r. divergens), which lives in the Pacific Ocean. Adult walrus are characterised by prominent tusks and whiskers, and considerable bulk: adult males in the Pacific can weigh more than and, among pinnipeds, are exceeded in size only by the two species of elephant seals. Walrus live mostly in shallow waters above the continental shelves, spending significant amounts of their lives on the sea ice looking for benthic bivalve molluscs. Walruses are relatively long-lived, social animals, and are considered to be a "keystone species" in the Arctic marine regions. The walrus has played a prominent role in the cultures of many indigenous Arctic peoples, who have hunted it for meat, fat, skin, tusks, and bone. During the 19th century and the early 20th century, walrus were widely hunted for their blubber, walrus ivory, and meat. The population of walruses dropped rapidly all around the Arctic region. It has rebounded somewhat since, though the populations of Atlantic and Laptev walruses remain fragmented and at low levels compared with the time before human interference. Etymology The origin of the word walrus derives from a Germanic language, and it has been attributed largely to either Dutch or Old Norse. Its first part is thought to derive from a word such as Old Norse ('whale') and the second part has been hypothesized to come from the Old Norse word ('horse'). For example, the Old Norse word means 'horse-whale' and is thought to have been passed in an inverted form to both Dutch and the dialects of northern Germany as and . An alternative theory is that it comes from the Dutch words 'shore' and 'giant'. The species name rosmarus is Scandinavian. The Norwegian manuscript Konungs skuggsjá, thought to date from around AD 1240, refers to the walrus as in Iceland and in Greenland (walruses were by now extinct in Iceland and Norway, while the word evolved in Greenland). Several place names in Iceland, Greenland and Norway may originate from walrus sites: Hvalfjord, Hvallatrar and Hvalsnes to name some, all being typical walrus breeding grounds. The archaic English word for walrus—morse—is widely thought to have come from the Slavic languages, which in turn borrowed it from Finno-Ugric languages, and ultimately (according to Ante Aikio) from an unknown Pre-Finno-Ugric substrate language of Northern Europe. Compare () in Russian, in Finnish, in Northern Saami, and in French. Olaus Magnus, who depicted the walrus in the in 1539, first referred to the walrus as the , probably a Latinization of , and this was adopted by Linnaeus in his binomial nomenclature. The coincidental similarity between morse and the Latin word ('a bite') supposedly contributed to the walrus's reputation as a "terrible monster". The compound Odobenus comes from (Greek for 'teeth') and (Greek for 'walk'), based on observations of walruses using their tusks to pull themselves out of the water. The term in Latin means 'turning apart', referring to their tusks. The Inuttitut term for the creature is aivik, similar to the Inuktitut word: aiviq ᐊᐃᕕᖅ. Taxonomy and evolution The walrus is a mammal in the order Carnivora. 
It is the sole surviving member of the family Odobenidae, one of three lineages in the suborder Pinnipedia along with true seals (Phocidae) and eared seals (Otariidae). While there has been some debate as to whether all three lineages are monophyletic, i.e. descended from a single ancestor, or diphyletic, recent genetic evidence suggests all three descended from a caniform ancestor most closely related to modern bears. Recent multigene analysis indicates the odobenids and otariids diverged from the phocids about 20–26 million years ago, while the odobenids and the otariids separated 15–20 million years ago. Odobenidae was once a highly diverse and widespread family, including at least twenty species in the subfamilies Imagotariinae, Dusignathinae and Odobeninae. The key distinguishing feature was the development of a squirt/suction feeding mechanism; tusks are a later feature specific to Odobeninae, of which the modern walrus is the last remaining (relict) species. Two subspecies of walrus are widely recognized: the Atlantic walrus, O. r. rosmarus (Linnaeus, 1758) and the Pacific walrus, O. r. divergens (Illiger, 1815). Fixed genetic differences between the Atlantic and Pacific subspecies indicate very restricted gene flow, but relatively recent separation, estimated at between 500,000 and 785,000 years ago. These dates coincide with the hypothesis derived from fossils that the walrus evolved from a tropical or subtropical ancestor that became isolated in the Atlantic Ocean and gradually adapted to colder conditions in the Arctic. The modern walrus is mostly known from Arctic regions, but a substantial breeding population occurred on isolated Sable Island, southeast of Nova Scotia and due east of Portland, Maine, until the early Colonial period. Abundant walrus remains have also been recovered from the southern North Sea dating to the Eemian interglacial period, when that region would have been submerged as it is today, unlike the intervening glacial lowstand when the shallow North Sea was dry land. Fossils known from San Francisco, Vancouver, and the Atlantic US coast as far south as North Carolina have been referred to glacial periods. An isolated population in the Laptev Sea was considered by some authorities, including many Russian biologists and the canonical Mammal Species of the World, to be a third subspecies, O. r. laptevi (Chapskii, 1940), but has since been determined to be of Pacific walrus origin. Anatomy While some outsized Pacific males can weigh as much as , most weigh between . An occasional male of the Pacific subspecies far exceeds normal dimensions. In 1909, a walrus hide weighing was collected from an enormous bull in Franz Josef Land, while in August 1910, Jack Woodson shot a walrus, harvesting its hide. Since a walrus's hide usually accounts for about 20% of its body weight, the total body mass of these two giants is estimated to have been at least . The Atlantic subspecies weighs about 10–20% less than the Pacific subspecies. Male Atlantic walrus weigh an average of . The Atlantic walrus also tends to have relatively shorter tusks and somewhat more of a flattened snout. Females weigh about two-thirds as much as males, with the Atlantic females averaging , sometimes weighing as little as , and the Pacific female averaging . Length typically ranges from . Newborn walruses are already quite large, averaging in weight and in length across both sexes and subspecies. All told, the walrus is the third largest pinniped species, after the two elephant seals. 
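The hide-to-body-mass estimate in the paragraph above is simple proportional arithmetic; a minimal sketch in Python follows, assuming the stated 20% hide fraction. The specific hide weights are not preserved in this text, so the input value below is purely illustrative, not a figure from the article.

    # Estimate a walrus's total body mass from its hide mass, assuming the
    # hide accounts for roughly 20% of total body weight (as stated above).
    def body_mass_from_hide(hide_mass_kg, hide_fraction=0.20):
        """Return the estimated total body mass in kilograms."""
        return hide_mass_kg / hide_fraction

    # Hypothetical example: a 500 kg hide implies a body mass of about 2,500 kg.
    print(body_mass_from_hide(500.0))  # 2500.0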
Walruses maintain such a high body weight because of the blubber stored underneath their skin. This blubber keeps them warm and the fat provides energy to the walrus. The walrus's body shape shares features with both sea lions (eared seals: Otariidae) and seals (true seals: Phocidae). As with otariids, it can turn its rear flippers forward and move on all fours; however, its swimming technique is more like that of true seals, relying less on flippers and more on sinuous whole body movements. Also like phocids, it lacks external ears. The extraocular muscles of the walrus are well-developed. This and its lack of orbital roof allow it to protrude its eyes and see in both a frontal and dorsal direction. However, vision in this species appears to be more suited for short-range. Tusks and dentition While this was not true of all extinct walruses, the most prominent feature of the living species is its long tusks. These are elongated canines, which are present in both male and female walruses and can reach a length of 1 m (3 ft 3 in) and weigh up to 5.4 kg (12 lb). Tusks are slightly longer and thicker among males, which use them for fighting, dominance and display; the strongest males with the largest tusks typically dominate social groups. Tusks are also used to form and maintain holes in the ice and aid the walrus in climbing out of water onto ice. Tusks were once thought to be used to dig out prey from the seabed, but analyses of abrasion patterns on the tusks indicate they are dragged through the sediment while the upper edge of the snout is used for digging. While the dentition of walruses is highly variable, they generally have relatively few teeth other than the tusks. The maximal number of teeth is 38 with dentition formula: , but over half of the teeth are rudimentary and occur with less than 50% frequency, such that a typical dentition includes only 18 teeth. Vibrissae (whiskers) Surrounding the tusks is a broad mat of stiff bristles ("mystacial vibrissae"), giving the walrus a characteristic whiskered appearance. There can be 400 to 700 vibrissae in 13 to 15 rows reaching 30 cm (12 in) in length, though in the wild they are often worn to much shorter lengths due to constant use in foraging. The vibrissae are attached to muscles and are supplied with blood and nerves, making them highly sensitive organs capable of differentiating shapes thick and wide. Skin Aside from the vibrissae, the walrus is sparsely covered with fur and appears bald. Its skin is highly wrinkled and thick, up to around the neck and shoulders of males. The blubber layer beneath is up to thick. Young walruses are deep brown and grow paler and more cinnamon-colored as they age. Old males, in particular, become nearly pink. Because skin blood vessels constrict in cold water, the walrus can appear almost white when swimming. As a secondary sexual characteristic, males also acquire significant nodules, called "bosses", particularly around the neck and shoulders. The walrus has an air sac under its throat which acts like a flotation bubble and allows it to bob vertically in the water and sleep. The males possess a large baculum (penis bone), up to in length, the largest of any land mammal, both in absolute size and relative to body size. Life history Reproduction Walruses live to about 20–30 years old in the wild. The males reach sexual maturity as early as seven years, but do not typically mate until fully developed at around 15 years of age. 
They rut from January through April, decreasing their food intake dramatically. The females begin ovulating as early as four to six years old. The females are diestrous, coming into heat in late summer and around February, yet the males are fertile only around February; the potential fertility of this second period is unknown. Breeding occurs from January to March, peaking in February. Males aggregate in the water around ice-bound groups of estrous females and engage in competitive vocal displays. The females join them and copulate in the water. Gestation lasts 15 to 16 months. The first three to four months are spent with the blastula in suspended development before it implants itself in the uterus. This strategy of delayed implantation, common among pinnipeds, presumably evolved to optimize both the mating season and the birthing season, determined by ecological conditions that promote newborn survival. Calves are born during the spring migration, from April to June. They weigh at birth and are able to swim. The mothers nurse for over a year before weaning, but the young can spend up to five years with the mothers. Walrus milk contains higher amounts of fats and protein compared to land animals but lower compared to phocid seals. This lower fat content in turn causes a slower growth rate among calves and a longer nursing investment for their mothers. Because ovulation is suppressed until the calf is weaned, females give birth at most every two years, leaving the walrus with the lowest reproductive rate of any pinniped. Migration During the rest of the year (late summer and fall), walruses tend to form massive aggregations of tens of thousands of individuals on rocky beaches or outcrops. The migration between the ice and the beach can be long-distance and dramatic. In late spring and summer, for example, several hundred thousand Pacific walruses migrate from the Bering Sea into the Chukchi Sea through the relatively narrow Bering Strait. Ecology Range and habitat The majority of the population of the Pacific walrus spends its summers north of the Bering Strait in the Chukchi Sea of the Arctic Ocean along the northern coast of eastern Siberia, around Wrangel Island, in the Beaufort Sea along the northern shore of Alaska south to Unimak Island, and in the waters between those locations. Smaller numbers of males summer in the Gulf of Anadyr on the southern coast of the Siberian Chukchi Peninsula, and in Bristol Bay off the southern coast of Alaska, west of the Alaska Peninsula. In the spring and fall, walruses congregate throughout the Bering Strait, reaching from the western coast of Alaska to the Gulf of Anadyr. They winter over in the Bering Sea along the eastern coast of Siberia south to the northern part of the Kamchatka Peninsula, and along the southern coast of Alaska. A 28,000-year-old fossil walrus was dredged up from the bottom of San Francisco Bay, indicating that Pacific walruses ranged that far south during the last Ice Age. Commercial harvesting reduced the population of the Pacific walrus to between 50,000 and 100,000 in the 1950s–1960s. Limits on commercial hunting allowed the population to increase to a peak in the 1970s–1980s, but subsequently, walrus numbers have again declined. Early aerial censuses of Pacific walrus conducted at five-year intervals between 1975 and 1985 estimated populations above 220,000 in each of the three surveys. 
In 2006, the population of the Pacific walrus was estimated to be around 129,000 on the basis of an aerial census combined with satellite tracking. There were roughly 200,000 Pacific walruses in 1990. The much smaller population of Atlantic walruses ranges from the Canadian Arctic, across Greenland, Svalbard, and the western part of Arctic Russia. There are eight hypothetical subpopulations of Atlantic walruses, based largely on their geographical distribution and movements: five west of Greenland and three east of Greenland. The Atlantic walrus once ranged south to Sable Island, off of Nova Scotia; as late as the 18th century, they could be found in large numbers in the Greater Gulf of St. Lawrence region, sometimes in colonies of 7,000–8,000 individuals. This population was nearly eradicated by commercial harvest; their current numbers, though difficult to estimate, probably remain below 20,000. In April 2006, the Canadian Species at Risk Act listed the populations of northwestern Atlantic walrus in Québec, New Brunswick, Nova Scotia, Newfoundland and Labrador as having been eradicated in Canada. A genetically distinct population existed in Iceland that was wiped out after Norse settlement around 1213–1330 AD. An isolated population is restricted, year-round, to the central and western regions of the Laptev Sea, from the eastern Kara Sea to the westernmost regions of the East Siberian Sea. The current population of these Laptev walruses has been estimated at between 5,000 and 10,000. Even though walruses can dive to depths beyond 500 metres, they spend most of their time in shallow waters (and the nearby ice floes) hunting for bivalves. In March 2021, a single walrus, nicknamed Wally the Walrus, was sighted at Valentia Island, Ireland, far south of its typical range, potentially due to having fallen asleep on an iceberg that then drifted south towards Ireland. Days later, a walrus, thought to be the same animal, was spotted on the Pembrokeshire coast, Wales. In June 2022, a single walrus was sighted on the shores of the Baltic Sea: at Rügen Island, Germany; Mielno, Poland; and Skälder Bay, Sweden. In July 2022, there was a report of a lost, starving walrus (nicknamed Stena) in the coastal waters of the towns of Hamina and Kotka in Kymenlaakso, Finland, that, despite rescue attempts, died of starvation as rescuers tried to transport it to the Korkeasaari Zoo for treatment. Diet Walruses prefer shallow shelf regions and forage primarily on the sea floor, often from sea ice platforms. They are not particularly deep divers compared to other pinnipeds; the deepest dives in a study of Atlantic walrus near Svalbard were only . However, a more recent study recorded dives exceeding in Smith Sound, between NW Greenland and Arctic Canada – in general, peak dive depth can be expected to depend on prey distribution and seabed depth. The walrus has a diverse and opportunistic diet, feeding on more than 60 genera of marine organisms, including shrimp, crabs, priapulids, spoon worms, tube worms, soft corals, tunicates, sea cucumbers, various mollusks (such as snails, octopuses, and squid), some types of slow-moving fish, and even parts of other pinnipeds. However, it prefers benthic bivalve mollusks, especially clams, for which it forages by grazing along the sea bottom, searching and identifying prey with its sensitive vibrissae and clearing the murky bottoms with jets of water and active flipper movements. 
The walrus sucks the meat out by sealing its powerful lips to the organism and withdrawing its piston-like tongue rapidly into its mouth, creating a vacuum. The walrus palate is uniquely vaulted, enabling effective suction. The diet of the Pacific walrus consists almost exclusively of benthic invertebrates (97 percent). Aside from the large numbers of organisms actually consumed by the walrus, its foraging has a large peripheral impact on benthic communities. It disturbs (bioturbates) the sea floor, releasing nutrients into the water column, encouraging mixing and movement of many organisms and increasing the patchiness of the benthos. Seal tissue has been observed in a fairly significant proportion of walrus stomachs in the Pacific, but the importance of seals in the walrus diet is under debate. There have been isolated observations of walruses preying on seals up to the size of a bearded seal. Rarely, incidents of walruses preying on seabirds, particularly Brünnich's guillemot (Uria lomvia), have been documented. Walruses may occasionally prey on ice-entrapped narwhals and scavenge on whale carcasses, but there is little evidence to prove this. Predators Due to its great size and tusks, the walrus has only two natural predators: the orca and the polar bear. The walrus does not, however, comprise a significant component of either of these predators' diets. Both the orca and the polar bear are also most likely to prey on walrus calves. The polar bear often hunts the walrus by rushing at beached aggregations and consuming the individuals crushed or wounded in the sudden exodus, typically younger or infirm animals. The bears also isolate walruses when they overwinter and are unable to escape a charging bear due to inaccessible diving holes in the ice. However, even an injured walrus is a formidable opponent for a polar bear, and direct attacks are rare. Armed with their ivory tusks, walruses have been known to fatally injure polar bears in battles if the bear follows the walrus into the water, where the bear is at a disadvantage. Polar bear–walrus battles are often extremely protracted and exhausting, and bears have been known to break away from the attack after injuring a walrus. Orcas regularly attack walruses, although walruses are believed to have successfully defended themselves via counterattack against the larger cetacean. However, orcas have been observed successfully attacking walruses with few or no injuries. Relationship with humans Conservation In the 18th and 19th centuries, the walrus was heavily exploited by American and European sealers and whalers, leading to the near-extirpation of the Atlantic subspecies. As early as 1871, traditional hunters were expressing concern about the numbers of walrus being hunted by whaling fleets. Commercial walrus harvesting is now outlawed throughout its range, although Chukchi, Yupik and Inuit peoples are permitted to kill small numbers towards the end of each summer. Traditional hunters used all parts of the walrus. The meat, often preserved, is an important winter nutrition source; the flippers are fermented and stored as a delicacy until spring; tusks and bone were historically used for tools, as well as material for handicrafts; the oil was rendered for warmth and light; the tough hide made rope and house and boat coverings; and the intestines and gut linings made waterproof parkas. 
While some of these uses have faded with access to alternative technologies, walrus meat remains an important part of local diets, and tusk carving and engraving remain a vital art form. According to Adolf Erik Nordenskiöld, European hunters and Arctic explorers found walrus meat not particularly tasty, and only ate it in case of necessity; however, walrus tongue was a delicacy. Walrus hunts are regulated by resource managers in Russia, the United States, Canada, and Greenland, and by representatives of the respective hunting communities. An estimated four to seven thousand Pacific walruses are harvested in Alaska and in Russia, including a significant portion (about 42%) of struck and lost animals. Several hundred are removed annually around Greenland. The sustainability of these levels of harvest is difficult to determine given uncertain population estimates and parameters such as fecundity and mortality. The Boone and Crockett Big Game Record book has entries for Atlantic and Pacific walrus. The largest recorded tusks are just over 30 inches and 37 inches long, respectively. The effects of global climate change are another element of concern. The extent and thickness of the pack ice have reached unusually low levels in several recent years. The walrus relies on this ice while giving birth and aggregating in the reproductive period. Thinner pack ice over the Bering Sea has reduced the amount of resting habitat near optimal feeding grounds. This more widely separates lactating females from their calves, increasing nutritional stress for the young and lowering reproductive rates. Reduced coastal sea ice has also been implicated in an increase in deaths from stampedes among walruses crowding the shorelines of the Chukchi Sea between eastern Russia and western Alaska. Analysis of trends in ice cover published in 2012 indicates that Pacific walrus populations are likely to continue to decline for the foreseeable future, and shift further north, but that careful conservation management might be able to limit these effects. Currently, two of the three walrus subspecies are listed as "least-concern" by the IUCN, while the third is "data deficient". The Pacific walrus is not listed as "depleted" according to the Marine Mammal Protection Act nor as "threatened" or "endangered" under the Endangered Species Act. The Russian Atlantic and Laptev Sea populations are classified as Category 2 (decreasing) and Category 3 (rare) in the Russian Red Book. Global trade in walrus ivory is restricted according to a CITES Appendix III listing. In October 2017, the Center for Biological Diversity announced it would sue the U.S. Fish and Wildlife Service to force it to classify the Pacific walrus as a threatened or endangered species. In 1952, after walruses in Svalbard had been nearly wiped out by ivory hunting over a 300-year period, the Norwegian government banned their commercial hunting and the walruses began to repopulate. By 2018, the population had increased to an estimated 5,503 walruses in the Svalbard area. Culture Folklore The walrus plays an important role in the religion and folklore of many Arctic peoples. Skin and bone are used in some ceremonies, and the animal appears frequently in legends. For example, in a Chukchi version of the widespread myth of the Raven, in which Raven recovers the sun and the moon from an evil spirit by seducing his daughter, the angry father throws the daughter from a high cliff and, as she drops into the water, she turns into a walrus. 
According to various legends, the tusks are formed either by the trails of mucus from the weeping girl or her long braids. This myth is possibly related to the Chukchi myth of the old walrus-headed woman who rules the bottom of the sea, who is in turn linked to the Inuit goddess Sedna. Both in Chukotka and Alaska, the aurora borealis is believed to be a special world inhabited by those who died by violence, the changing rays representing deceased souls playing ball with a walrus head. Most of the distinctive 12th-century Lewis Chessmen from northern Europe are carved from walrus ivory, though a few have been found to be made of whales' teeth. Literature Because of its distinctive appearance, great bulk, and immediately recognizable whiskers and tusks, the walrus also appears in the popular cultures of peoples with little direct experience with the animal, particularly in English children's literature. Perhaps its best-known appearance is in Lewis Carroll's whimsical poem "The Walrus and the Carpenter" that appears in his 1871 book Through the Looking-Glass. In the poem, the eponymous antiheroes use trickery to consume a great number of oysters. Although Carroll accurately portrays the biological walrus's appetite for bivalve mollusks, oysters, which are primarily nearshore and intertidal inhabitants, in fact comprise an insignificant portion of its diet, even in captivity. The "walrus" in the cryptic song "I Am the Walrus" by the Beatles is a reference to the Lewis Carroll poem. Another appearance of the walrus in literature is in the story "The White Seal" in Rudyard Kipling's The Jungle Book, where it is the "old Sea Vitch—the big, ugly, bloated, pimpled, fat-necked, long-tusked walrus of the North Pacific, who has no manners except when he is asleep".
Biology and health sciences
Pinnipeds
null
33777
https://en.wikipedia.org/wiki/Whale
Whale
Whales are a widely distributed and diverse group of fully aquatic placental marine mammals. As an informal and colloquial grouping, they correspond to large members of the infraorder Cetacea, i.e. all cetaceans apart from dolphins and porpoises. Dolphins and porpoises may be considered whales from a formal, cladistic perspective. Whales, dolphins and porpoises belong to the order Cetartiodactyla, which consists of even-toed ungulates. Their closest non-cetacean living relatives are the hippopotamuses, from which they and other cetaceans diverged about 54 million years ago. The two parvorders of whales, baleen whales (Mysticeti) and toothed whales (Odontoceti), are thought to have had their last common ancestor around 34 million years ago. Mysticetes include four extant (living) families: Balaenopteridae (the rorquals), Balaenidae (right whales), Cetotheriidae (the pygmy right whale), and Eschrichtiidae (the grey whale). Odontocetes include the Monodontidae (belugas and narwhals), Physeteridae (the sperm whale), Kogiidae (the dwarf and pygmy sperm whale), and Ziphiidae (the beaked whales), as well as the six families of dolphins and porpoises which are not considered whales in the informal sense. Whales are fully aquatic, open-ocean animals: they can feed, mate, give birth, suckle and raise their young at sea. Whales range in size from the and dwarf sperm whale to the and blue whale, which is the largest known animal that has ever lived. The sperm whale is the largest toothed predator on Earth. Several whale species exhibit sexual dimorphism, in that the females are larger than males. Baleen whales have no teeth; instead, they have plates of baleen, fringe-like structures that enable them to expel the huge mouthfuls of water they take in while retaining the krill and plankton they feed on. Because their heads are enormous—making up as much as 40% of their total body mass—and they have throat pleats that enable them to expand their mouths, they are able to take huge quantities of water into their mouth at a time. Baleen whales also have a well-developed sense of smell. Toothed whales, in contrast, have conical teeth adapted to catching fish or squid. They also have such keen hearing—whether above or below the surface of the water—that some can survive even if they are blind. Some species, such as sperm whales, are particularly well adapted for diving to great depths to catch squid and other favoured prey. Whales evolved from land-living mammals, and must regularly surface to breathe air, although they can remain underwater for long periods of time. Some species, such as the sperm whale, can stay underwater for up to 90 minutes. They have blowholes (modified nostrils) located on top of their heads, through which air is taken in and expelled. They are warm-blooded, and have a layer of fat, or blubber, under the skin. With streamlined fusiform bodies and two limbs that are modified into flippers, whales can travel at speeds of up to 20 knots, though they are not as flexible or agile as seals. Whales produce a great variety of vocalizations, notably the extended songs of the humpback whale. Although whales are widespread, most species prefer the colder waters of the Northern and Southern Hemispheres and migrate to the equator to give birth. Species such as humpbacks and blue whales are capable of travelling thousands of miles without feeding. Males typically mate with multiple females every year, but females only mate every two to three years. 
Calves are typically born in the spring and summer; females bear all the responsibility for raising them. Mothers in some species fast and nurse their young for one to two years. Once relentlessly hunted for their products, whales are now protected by international law. The North Atlantic right whale nearly became extinct in the twentieth century, with a population low of 450, and the North Pacific grey whale population is ranked Critically Endangered by the IUCN. Besides the threat from whalers, they also face threats from bycatch and marine pollution. The meat, blubber and baleen of whales have traditionally been used by indigenous peoples of the Arctic. Whales have been depicted in various cultures worldwide, notably by the Inuit and the coastal peoples of Vietnam and Ghana, who sometimes hold whale funerals. Whales occasionally feature in literature and film. A famous example is the great white whale in Herman Melville's novel Moby-Dick. Small whales, such as belugas, are sometimes kept in captivity and trained to perform tricks, but breeding success has been poor and the animals often die within a few months of capture. Whale watching has become a form of tourism around the world. Etymology and definitions The word "whale" comes from the Old English hwæl, from Proto-Germanic *hwalaz, from Proto-Indo-European *(s)kwal-o-, meaning "large sea fish". The Proto-Germanic *hwalaz is also the source of Old Saxon hwal, Old Norse hvalr, hvalfiskr, Swedish val, Middle Dutch wal, walvisc, Dutch walvis, Old High German wal, and German Wal. Other archaic English forms include wal, wale, whal, whalle, whaille, wheal, etc. The term "whale" is sometimes used interchangeably with dolphins and porpoises, acting as a synonym for Cetacea. Six species of dolphins have the word "whale" in their name, collectively known as blackfish: the orca, or killer whale, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whales, all of which are classified under the family Delphinidae (oceanic dolphins). Each species has a different reason for its name; for example, the killer whale was named "Ballena asesina" ("killer whale") by Spanish sailors. The term "Great Whales" covers those currently regulated by the International Whaling Commission: the Odontoceti family Physeteridae (sperm whales); and the Mysticeti families Balaenidae (right and bowhead whales), Eschrichtiidae (grey whales), and some of the Balaenopteridae (minke, Bryde's, sei, blue and fin; not Eden's and Omura's whales). Taxonomy and evolution Phylogeny The whales are part of the largely terrestrial mammalian clade Laurasiatheria. Whales do not form a clade or order; the infraorder Cetacea includes dolphins and porpoises, which are not considered whales in the informal sense. The phylogenetic tree shows the relationships of whales and other mammals, with whale groups marked in green. Cetaceans are divided into two parvorders. The larger parvorder, Mysticeti (baleen whales), is characterized by the presence of baleen, a sieve-like structure in the upper jaw made of keratin, which it uses to filter plankton, among other organisms, from the water. Odontocetes (toothed whales) are characterized by bearing sharp teeth for hunting, as opposed to their counterparts' baleen. Cetaceans and artiodactyls are now classified under the order Cetartiodactyla, often still referred to as Artiodactyla, which includes both whales and hippopotamuses. 
The hippopotamus and pygmy hippopotamus are the whales' closest terrestrial living relatives. Mysticetes Mysticetes are also known as baleen whales. They have a pair of blowholes side by side and lack teeth; instead they have baleen plates which form a sieve-like structure in the upper jaw made of keratin, which they use to filter plankton from the water. Some whales, such as the humpback, reside in the polar regions where they feed on a reliable source of schooling fish and krill. These animals rely on their well-developed flippers and tail fin to propel themselves through the water; they swim by moving their fore-flippers and tail fin up and down. Whale ribs loosely articulate with their thoracic vertebrae at the proximal end, but do not form a rigid rib cage. This adaptation allows the chest to compress during deep dives as the pressure increases. Mysticetes consist of four families: rorquals (balaenopterids), cetotheriids, right whales (balaenids), and grey whales (eschrichtiids). The main difference between each family of mysticete is in their feeding adaptations and subsequent behaviour. Balaenopterids are the rorquals. These animals, along with the cetotheriids, rely on their throat pleats to gulp large amounts of water while feeding. The throat pleats extend from the mouth to the navel and allow the mouth to expand to a large volume for more efficient capture of the small animals they feed on. Balaenopterids consist of two genera and eight species. Balaenids are the right whales. These animals have very large heads, which can make up as much as 40% of their body mass, and much of the head is the mouth. This allows them to take large amounts of water into their mouths, letting them feed more effectively. Eschrichtiids have one living member: the grey whale. They are bottom feeders, mainly eating crustaceans and benthic invertebrates. They feed by turning on their sides and taking in water mixed with sediment, which is then expelled through the baleen, leaving their prey trapped inside. This is an efficient method of hunting, in which the whale has no major competitors. Odontocetes Odontocetes are known as toothed whales; they have teeth and only one blowhole. They rely on their well-developed sonar to find their way in the water. Toothed whales send out ultrasonic clicks using the melon. The sound waves travel through the water and, upon striking an object, bounce back to the whale. The returning vibrations are received through fatty tissues in the jaw, routed to the ear-bone and on to the brain, where they are interpreted. All toothed whales are opportunistic, meaning they will eat anything they can fit in their throat because they are unable to chew. These animals rely on their well-developed flippers and tail fin to propel themselves through the water; they swim by moving their fore-flippers and tail fin up and down. Whale ribs loosely articulate with their thoracic vertebrae at the proximal end, but they do not form a rigid rib cage. This adaptation allows the chest to compress during deep dives as opposed to resisting the force of water pressure. Excluding dolphins and porpoises, odontocetes consist of four families: belugas and narwhals (monodontids), sperm whales (physeterids), dwarf and pygmy sperm whales (kogiids), and beaked whales (ziphiids). The differences between families of odontocetes include size, feeding adaptations and distribution. Monodontids consist of two species: the beluga and the narwhal. 
They both reside in the frigid Arctic and both have large amounts of blubber. Belugas, being white, hunt in large pods near the surface and around pack ice, their coloration acting as camouflage. Narwhals, being black, hunt in large pods in the aphotic zone, but their underbellies remain white, keeping them camouflaged when viewed directly from above or below. Neither species has a dorsal fin, which prevents collisions with pack ice. Physeterids and kogiids are the sperm whales. Sperm whales include both the largest and the smallest odontocetes, and spend a large portion of their lives hunting squid. P. macrocephalus spends most of its life in search of squid in the depths; these animals do not require any light at all; in fact, blind sperm whales have been caught in perfect health. The behaviour of kogiids remains largely unknown, but, due to their small lungs, they are thought to hunt in the photic zone. Ziphiids consist of 22 species of beaked whale. These vary in size, coloration and distribution, but they all share a similar hunting style. They use a suction technique, aided by a pair of grooves on the underside of their head, not unlike the throat pleats on the rorquals, to feed. As a formal clade (a group which does not exclude any descendant taxon), odontocetes also contains the porpoises (Phocoenidae) and four or five living families of dolphins: oceanic dolphins (Delphinidae), South Asian river dolphins (Platanistidae), the possibly extinct Yangtze River dolphin (Lipotidae), South American river dolphins (Iniidae), and the La Plata dolphin (Pontoporiidae). Evolution Whales are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago. Primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic 5–10 million years later. What defines an archaeocete is the presence of anatomical features exclusive to cetaceans, alongside other primitive features not found in modern cetaceans, such as visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. Major anatomical changes included a hearing set-up that channeled vibrations from the jaw to the earbone (Ambulocetus, 49 mya), a streamlined body and the growth of flukes on the tail (Protocetus, 43 mya), the migration of the nostrils toward the top of the cranium (blowholes) and the modification of the forelimbs into flippers (Basilosaurus, 35 mya), and the shrinking and eventual disappearance of the hind limbs (the first odontocetes and mysticetes, 34 mya). Whale morphology shows several examples of convergent evolution, the most obvious being the streamlined fish-like body shape. Other examples include the use of echolocation for hunting in low light conditions — which is the same hearing adaptation used by bats — and, in the rorqual whales, jaw adaptations, similar to those found in pelicans, that enable engulfment feeding. Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 mya. Around 40 mya, a common ancestor of the two diverged into cetaceans and anthracotheres; nearly all anthracotheres became extinct at the end of the Pliocene, 2.5 mya, eventually leaving only one surviving lineage – the hippopotamus. 
Whales split into two separate parvorders around 34 mya – the baleen whales (Mysticetes) and the toothed whales (Odontocetes). Biology Anatomy Whales have torpedo-shaped bodies with non-flexible necks, limbs modified into flippers, non-existent external ear flaps, a large tail fin, and flat heads (with the exception of monodontids and ziphiids). Whale skulls have small eye orbits, long snouts (with the exception of monodontids and ziphiids) and eyes placed on the sides of the head. Whales range in size from the and dwarf sperm whale to the and blue whale. Overall, they tend to dwarf other cetartiodactyls; the blue whale is the largest creature on Earth. Several species have female-biased sexual dimorphism, with the females being larger than the males. One exception is the sperm whale, in which the males are larger than the females. Odontocetes, such as the sperm whale, possess teeth with cementum cells overlying dentine cells. Unlike human teeth, which are composed mostly of enamel on the portion of the tooth outside of the gum, whale teeth have cementum outside the gum. Only in larger whales, where the cementum is worn away on the tip of the tooth, does enamel show. Mysticetes have plates of whalebone (baleen), made of keratin, instead of teeth. Mysticetes have two blowholes, whereas odontocetes have only one. Breathing involves expelling stale air from the blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs; a humpback whale's lungs can hold about of air. Spout shapes differ among species, which facilitates identification. All whales have a thick layer of blubber. In species that live near the poles, the blubber can be as thick as . This blubber can help with buoyancy (which is helpful for a 100-ton whale), protection to some extent as predators would have a hard time getting through a thick layer of fat, and energy for fasting when migrating to the equator; its primary use, however, is insulation from the harsh climate. It can constitute as much as 50% of a whale's body weight. Calves are born with only a thin layer of blubber, but some species compensate for this with thick lanugos. Whales have a two- to three-chambered stomach that is similar in structure to those of terrestrial carnivores. Mysticetes have a proventriculus as an extension of the oesophagus; this contains stones that grind up food. They also have fundic and pyloric chambers. Locomotion Whales have two flippers on the front, and a tail fin. These flippers contain four digits. Although whales do not possess fully developed hind limbs, some, such as the sperm whale and bowhead whale, possess discrete rudimentary appendages, which may contain feet and digits. Whales are fast swimmers compared to seals, which typically cruise at 5–15 kn, or ; the fin whale can travel at speeds up to and the sperm whale can reach speeds of . The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility; whales are unable to turn their heads. When swimming, whales rely on their tail fin to propel them through the water. Flipper movement is continuous. Whales swim by moving their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Some species porpoise out of the water, which may allow them to travel faster. Their skeletal anatomy allows them to be fast swimmers. Most species have a dorsal fin. Whales are adapted for diving to great depths. 
In addition to their streamlined bodies, they can slow their heart rate to conserve oxygen; blood is rerouted from tissues tolerant of water pressure to the heart and brain, among other organs; haemoglobin and myoglobin store oxygen in body tissue; and they have twice the concentration of myoglobin as haemoglobin. Before going on long dives, many whales exhibit a behaviour known as sounding; they stay close to the surface for a series of short, shallow dives while building their oxygen reserves, and then make a sounding dive. Senses The whale ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In whales, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, whales receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The whale ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater. Odontocetes send out high-frequency clicks from an organ known as a melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. Melon size varies between species: the bigger the melon, the more dependent the species is on it. A beaked whale, for example, has a small bulge sitting on top of its skull, whereas a sperm whale's head is filled up mainly with the melon. The whale eye is relatively small for the animal's size, yet whales do retain a good degree of eyesight. In addition, the eyes of a whale are placed on the sides of its head, so their vision consists of two fields, rather than the binocular view humans have. When belugas surface, their lens and cornea correct the nearsightedness that results from the refraction of light; they contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than they do cone cells. Whales do, however, lack short-wavelength-sensitive visual pigments in their cone cells, indicating a more limited capacity for colour vision than most mammals. Most whales have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum; these adaptations allow for large amounts of light to pass through the eye and, therefore, a very clear image of the surrounding area. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea. The olfactory lobes are absent in toothed whales, suggesting that they have no sense of smell. Some whales, such as the bowhead whale, possess a vomeronasal organ, which suggests that they can "sniff out" krill. Whales are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. However, some toothed whales have preferences between different kinds of fish, indicating some capacity for taste. The presence of the Jacobson's organ indicates that whales can smell food once inside their mouth, which might be similar to the sensation of taste. Communication Whale vocalization is likely to serve several purposes. Some species, such as the humpback whale, communicate using melodic sounds, known as whale song. These sounds may be extremely loud, depending on the species. 
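The air/water impedance mismatch described in the hearing passage above can be made concrete with the standard plane-wave formulas. Below is a minimal sketch in Python; the densities, sound speeds and tissue impedance are approximate textbook values assumed for illustration, not figures from this article.

    # Sketch: why a whale ear needs no air-to-fluid impedance equalizer.
    # Densities and sound speeds are approximate textbook values (assumed).

    def impedance(density_kg_m3, sound_speed_m_s):
        # Characteristic acoustic impedance Z = rho * c, in rayls (Pa*s/m).
        return density_kg_m3 * sound_speed_m_s

    def transmitted_fraction(z1, z2):
        # Fraction of sound intensity crossing a boundary at normal
        # incidence: T = 4*Z1*Z2 / (Z1 + Z2)**2.
        return 4 * z1 * z2 / (z1 + z2) ** 2

    z_air = impedance(1.2, 343)          # ~4.1e2 rayl
    z_seawater = impedance(1025, 1500)   # ~1.5e6 rayl
    z_tissue = 1.6e6                     # soft tissue, roughly water-like (assumed)

    # Air to water: only ~0.1% of intensity gets through, which is why
    # land mammals need a middle ear to bridge the gap.
    print(f"air -> water:    {transmitted_fraction(z_air, z_seawater):.3%}")
    # Water to tissue: nearly everything gets through, consistent with
    # whales receiving sound via the throat and jaw fat.
    print(f"water -> tissue: {transmitted_fraction(z_seawater, z_tissue):.1%}")

The three-orders-of-magnitude impedance gap between air and water is the whole story: underwater, a whale's water-like tissues are already matched to the medium, so no equalizing middle ear is needed.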
Among baleen whales, only humpback whales have been heard making clicks, while toothed whales use sonar that may generate up to 20,000 watts of sound (+73 dBm or +43 dBW) and be heard for many miles. Captive whales have occasionally been known to mimic human speech. Scientists have suggested this indicates a strong desire on the part of the whales to communicate with humans, as whales have a very different vocal mechanism, so imitating human speech likely takes considerable effort. Whales emit two distinct kinds of acoustic signals, which are called whistles and clicks: Clicks are quick broadband burst pulses, used for sonar, although some lower-frequency broadband vocalizations may serve a non-echolocative purpose such as communication; for example, the pulsed calls of belugas. Pulses in a click train are emitted at intervals of ≈35–50 milliseconds, and in general these inter-click intervals are slightly greater than the round-trip time of sound to the target. Whistles are narrow-band frequency modulated (FM) signals, used for communicative purposes, such as contact calls. Intelligence Whales are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species of whale is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgement, and theory of mind. Whale spindle neurons are found in areas of the brain that are homologous to where they are found in humans, suggesting that they perform a similar function. Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain-to-body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalisation quotient that can be used as another indication of animal intelligence. Sperm whales have the largest brain mass of any animal on Earth, averaging and in mature males, in comparison to the average human brain which averages in mature males. The brain-to-body mass ratio in some odontocetes, such as belugas and narwhals, is second only to humans. Small whales are known to engage in complex play behaviour, which includes such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". There are two main methods of bubble ring production: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring, or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. They also appear to enjoy biting the vortex rings, so that they burst into many separate bubbles and then rise quickly to the surface. Some believe this is a means of communication. Whales are also known to produce bubble nets for the purpose of foraging. Larger whales are also thought, to some degree, to engage in play. The southern right whale, for example, elevates its tail fluke above the water, remaining in the same position for a considerable amount of time. This is known as "sailing". It appears to be a form of play and is most commonly seen off the coast of Argentina and South Africa. Humpback whales, among others, are also known to display this behaviour. 
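The sound-power and click-timing figures in the communication paragraph above can be sanity-checked with short calculations. The sketch below assumes a speed of sound in seawater of roughly 1,500 m/s, a textbook value not given in this article.

    import math

    # Check the quoted power levels: dBm is power relative to 1 mW,
    # dBW is power relative to 1 W.
    p = 20_000.0  # watts, as quoted above
    print(10 * math.log10(p / 1e-3))  # ~73.0 -> "+73 dBm"
    print(10 * math.log10(p / 1.0))   # ~43.0 -> "+43 dBW"

    # If each inter-click interval at least covers the echo's round trip,
    # the target must lie within c * interval / 2.
    c = 1500.0  # m/s, assumed approximate speed of sound in seawater
    for interval_s in (0.035, 0.050):
        print(f"{interval_s * 1e3:.0f} ms interval -> target within ~{c * interval_s / 2:.0f} m")

On these assumptions, the quoted 35–50 ms inter-click intervals correspond to targets within roughly 25–40 m, which is consistent with the intervals being "slightly greater than the round-trip time of sound to the target".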
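The encephalisation quotient described in the intelligence paragraph above reduces to a one-line formula: observed brain mass divided by the brain mass the allometric fit predicts for the body mass. A minimal sketch follows, assuming the ⅔ exponent from the text, Jerison's classic normalization constant of 0.12 with masses in grams, and hypothetical round-number masses; none of these numbers come from this article.

    def expected_brain_mass_g(body_mass_g, c=0.12, exponent=2/3):
        # Allometric prediction: E = c * (body mass) ** exponent.
        return c * body_mass_g ** exponent

    def encephalisation_quotient(brain_mass_g, body_mass_g):
        # EQ > 1 means a larger brain than body mass alone would predict.
        return brain_mass_g / expected_brain_mass_g(body_mass_g)

    # Hypothetical round-number masses, for illustration only:
    print(encephalisation_quotient(1_350, 65_000))      # human-sized animal
    print(encephalisation_quotient(2_000, 1_400_000))   # beluga-sized animal

Because the predicted brain mass grows more slowly than the body, very large animals such as whales can have the largest absolute brains on Earth while still scoring lower EQs than much smaller species.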
Life cycle Whales are fully aquatic creatures, which means that birth and courtship behaviours are very different from terrestrial and semi-aquatic creatures. Since they are unable to go onto land to calve, they deliver the baby with the fetus positioned for tail-first delivery. This prevents the baby from drowning either upon or during delivery. To feed the newborn, whales, being aquatic, must squirt the milk into the mouth of the calf. Nursing can occur while the mother whale is in the vertical or horizontal position. While nursing in the vertical position, a mother whale may sometimes rest with her tail flukes remaining stationary above the water. This position with the flukes above the water is known as "whale-tail-sailing." Not all whale-tail-sailing includes nursing of the young, as whales have also been observed tail-sailing while no calves were present. Being mammals, they have mammary glands used for nursing calves; weaning occurs at about 11 months of age. This milk contains high amounts of fat which is meant to hasten the development of blubber; it contains so much fat that it has the consistency of toothpaste. Like humans, female whales typically deliver a single offspring. Whales have a typical gestation period of 12 months. Once born, the absolute dependency of the calf upon the mother lasts from one to two years. Sexual maturity is achieved at around seven to ten years of age. The length of the developmental phases of a whale's early life varies between different whale species. As with humans, the whale mode of reproduction typically produces but one offspring approximately once each year. While whales have fewer offspring over time than most species, the probability of survival for each calf is also greater than for most other species. Female whales are referred to as "cows." Cows assume full responsibility for the care and training of their young. Male whales, referred to as "bulls," typically play no role in the process of calf-rearing. Most baleen whales reside at the poles. To prevent the unborn baleen whale calves from dying of frostbite, the baleen mother must migrate to warmer calving/mating grounds. They will then stay there for a matter of months until the calf has developed enough blubber to survive the bitter temperatures of the poles. Until then, the baleen calves will feed on the mother's fatty milk. With the exception of the humpback whale, it is largely unknown when whales migrate. Most will travel from the Arctic or Antarctic into the tropics to mate, calve, and raise their young during the winter and spring; they will migrate back to the poles in the warmer summer months so the calf can continue growing while the mother can continue eating, as they fast in the breeding grounds. One exception to this is the southern right whale, which migrates to Patagonia and western New Zealand to calve; both are well out of the tropic zone. Sleep Unlike most animals, whales are conscious breathers. All mammals sleep, but whales cannot afford to become unconscious for long because they may drown. While knowledge of sleep in wild cetaceans is limited, toothed cetaceans in captivity have been recorded to sleep with one side of their brain at a time, so that they may swim, breathe consciously, and avoid both predators and social contact during their period of rest. 
A 2008 study found that sperm whales sleep in vertical postures just under the surface in passive shallow "drift-dives", generally during the day, during which they do not respond to passing vessels unless these make contact, leading to the suggestion that whales may sleep during such dives. Ecology Foraging and predation All whales are carnivorous and predatory. Odontocetes, as a whole, mostly feed on fish and cephalopods, followed by crustaceans and bivalves. All species are generalist and opportunistic feeders. Mysticetes, as a whole, mostly feed on krill and plankton, followed by crustaceans and other invertebrates. A few are specialists. Examples include the blue whale, which eats almost exclusively krill, the minke whale, which eats mainly schooling fish, the sperm whale, which specializes in squid, and the grey whale, which feeds on bottom-dwelling invertebrates. The elaborate baleen "teeth" of the filter-feeding mysticetes act as a sieve, allowing them to expel water before they swallow their planktonic food. Usually, whales hunt solitarily, but they do sometimes hunt cooperatively in small groups. The former behaviour is typical when hunting non-schooling fish, slow-moving or immobile invertebrates or endothermic prey. When large amounts of prey are available, whales such as certain mysticetes hunt cooperatively in small groups. Some cetaceans may forage with other kinds of animals, such as other species of whales or certain species of pinnipeds. Large whales, such as mysticetes, are not usually subject to predation, but smaller whales, such as monodontids or ziphiids, are. These species are preyed on by the orca. To subdue and kill whales, orcas continuously ram them with their heads; this can sometimes kill bowhead whales, or severely injure them. Other times they corral the narwhals or belugas before striking. They are typically hunted by groups of 10 or fewer orcas, but seldom by an individual. Calves are more commonly taken by orcas, but adults can be targeted as well. These small whales are also targeted by terrestrial and pagophilic predators. The polar bear is well adapted for hunting Arctic whales and calves. Bears are known to use sit-and-wait tactics as well as active stalking and pursuit of prey on ice or water. Whales lessen the chance of predation by gathering in groups. This, however, means less room around the breathing hole as the ice slowly closes the gap. When out at sea, whales dive out of the reach of surface-hunting orcas. Polar bear attacks on belugas and narwhals are usually successful in winter, but rarely inflict any damage in summer. Whale pump A 2010 study considered whales to be a positive influence on the productivity of ocean fisheries, in what has been termed a "whale pump." Whales carry nutrients such as nitrogen from the depths back to the surface. This functions as an upward biological pump, reversing an earlier presumption that whales accelerate the loss of nutrients to the bottom. This nitrogen input in the Gulf of Maine is "more than the input of all rivers combined" emptying into the gulf, some each year. Whales defecate at the ocean's surface; their excrement is important for fisheries because it is rich in iron and nitrogen. Whale faeces are liquid and, instead of sinking, stay at the surface, where phytoplankton feed on them. Whale fall Upon death, whale carcasses fall to the deep ocean and provide a substantial habitat for marine life. 
Evidence of whale falls in present-day and fossil records shows that deep-sea whale falls support a rich assemblage of creatures, with a global diversity of 407 species, comparable to other neritic biodiversity hotspots, such as cold seeps and hydrothermal vents. Deterioration of whale carcasses happens through a series of three stages. Initially, mobile organisms such as sharks and hagfish scavenge the soft tissues at a rapid rate over a period of months, and as long as two years. This is followed by the colonization of bones and surrounding sediments (which contain organic matter) by enrichment opportunists, such as crustaceans and polychaetes, throughout a period of years. Finally, sulfophilic bacteria reduce the bones, releasing hydrogen sulfide and enabling the growth of chemoautotrophic organisms, which, in turn, support other organisms such as mussels, clams, limpets, and sea snails. This stage may last for decades and supports a rich assemblage of species, averaging 185 species per site. Relationship with humans Whaling Whaling by humans has existed since the Stone Age. Ancient whalers used harpoons to spear the bigger animals from boats out at sea. People from Norway and Japan started hunting whales around 2000 B.C. Whales are typically hunted for their meat and blubber by aboriginal groups; they used baleen for baskets or roofing, and made tools and masks out of bones. The Inuit hunted whales in the Arctic Ocean. The Basques started whaling as early as the 11th century, sailing as far as Newfoundland in the 16th century in search of right whales. 18th- and 19th-century whalers hunted whales mainly for their oil, which was used as lamp fuel and a lubricant, baleen or whalebone, which was used for items such as corsets and skirt hoops, and ambergris, which was used as a fixative for perfumes. The most successful whaling nations at this time were the Netherlands, Japan, and the United States. Commercial whaling was an important industry throughout the 17th, 18th and 19th centuries. Whaling was at that time a sizeable European industry with ships from Britain, France, Spain, Denmark, the Netherlands and Germany, sometimes collaborating to hunt whales in the Arctic, sometimes in competition leading even to war. By the early 1790s, whalers, namely the Americans and Australians, focused efforts in the South Pacific where they mainly hunted sperm whales and right whales, with catches of up to 39,000 right whales by Americans alone. By 1853, US profits reached US$11,000,000 (£6.5m), equivalent to US$348,000,000 (£230m) today, the most profitable year for the American whaling industry. Commonly exploited species included North Atlantic right whales, sperm whales, which were mainly hunted by Americans, bowhead whales, which were mainly hunted by the Dutch, common minke whales, blue whales, and grey whales. The scale of whale harvesting decreased substantially after 1982 when the International Whaling Commission (IWC) placed a moratorium which set a catch limit for each country, excluding aboriginal groups until 2004. Current whaling nations are Norway, Iceland, and Japan, despite their membership of the IWC, as well as the aboriginal communities of Siberia, Alaska, and northern Canada. Subsistence hunters typically use whale products for themselves and depend on them for survival. National and international authorities have given special treatment to aboriginal hunters since their methods of hunting are seen as less destructive and wasteful. 
This distinction is being questioned as these aboriginal groups are using more modern weaponry and mechanized transport to hunt with, and are selling whale products in the marketplace. Some anthropologists argue that the term "subsistence" should also apply to these cash-based exchanges as long as they take place within local production and consumption. The IWC, established in 1946, limits the annual whale catch; since the moratorium, yearly profits for these "subsistence" hunters have been close to US$31 million (£20m) per year. Other threats Whales can also be threatened by humans more indirectly. They are unintentionally caught in fishing nets by commercial fisheries as bycatch and accidentally swallow fishing hooks. Gillnetting and seine netting are significant causes of mortality in whales and other marine mammals. Species commonly entangled include beaked whales. Whales are also affected by marine pollution. High levels of organic chemicals accumulate in these animals since they are high in the food chain. They have large reserves of blubber, in which these pollutants accumulate, more so for toothed whales as they are higher up the food chain than baleen whales. Lactating mothers can pass the toxins on to their young. These pollutants can cause gastrointestinal cancers and greater vulnerability to infectious diseases. They can also be poisoned by swallowing litter, such as plastic bags. Advanced military sonar harms whales. Sonar interferes with the basic biological functions of whales—such as feeding and mating—by impacting their ability to echolocate. Whales swim away in response to sonar and sometimes experience decompression sickness due to rapid changes in depth. Mass strandings have been triggered by sonar activity, resulting in injury or death. Whales are sometimes killed or injured during collisions with ships or boats. This is considered to be a significant threat to vulnerable whale populations such as the North Atlantic right whale, whose total population numbers fewer than 500. Conservation Whaling decreased substantially after 1982 when, in response to the steep decline in whale populations, the International Whaling Commission placed a moratorium which set a catch limit for each country; this excluded aboriginal groups up until 2004. As of 2015, aboriginal communities are allowed to take 280 bowhead whales off Alaska and two from the western coast of Greenland, 620 grey whales off Washington state, three common minke whales off the eastern coast of Greenland and 178 on its western coast, 10 fin whales from the west coast of Greenland, nine humpback whales from the west coast of Greenland and 20 off St. Vincent and the Grenadines each year. Several species that were commercially exploited have rebounded in numbers; for example, grey whales may be as numerous as they were prior to harvesting, but the North Atlantic population is functionally extinct. Conversely, the North Atlantic right whale was extirpated from much of its former range, which stretched across the North Atlantic, and only remains in small fragments along the coasts of Canada and Greenland, and is considered functionally extinct along the European coastline. The IWC has designated two whale sanctuaries: the Southern Ocean Whale Sanctuary, and the Indian Ocean Whale Sanctuary. The Southern Ocean whale sanctuary spans and envelops Antarctica. The Indian Ocean whale sanctuary takes up all of the Indian Ocean south of 55°S. Membership of the IWC is voluntary: any nation may leave as it wishes, and the IWC cannot enforce the rules it makes. 
There are at least 86 cetacean species that are recognized by the International Whaling Commission Scientific Committee. Of these, six are considered at risk, being ranked "Critically Endangered" (North Atlantic right whale), "Endangered" (blue whale, North Pacific right whale, and sei whale) and "Vulnerable" (fin whale and sperm whale). Twenty-one species have a "Data Deficient" ranking. Species that live in polar habitats are vulnerable to the effects of recent and ongoing climate change, particularly the timing of pack ice formation and melting. Whale watching An estimated 13 million people went whale watching globally in 2008, in all oceans except the Arctic. Rules and codes of conduct have been created to minimize harassment of the whales. Iceland, Japan and Norway have both whaling and whale watching industries. Whale watching lobbyists are concerned that the most inquisitive whales, which approach boats closely and provide much of the entertainment on whale-watching trips, will be the first to be taken if whaling is resumed in the same areas. Whale watching generated US$2.1 billion (£1.4 billion) per annum in tourism revenue worldwide, employing around 13,000 workers. In contrast, the whaling industry, with the moratorium in place, generates US$31 million (£20 million) per year. The size and rapid growth of the industry has led to complex and continuing debates with the whaling industry about the best use of whales as a natural resource. In myth, literature and art Because whales are marine creatures that reside either in the depths or at the poles, humans knew very little about them over the course of human history; many feared or revered them. The Vikings and various Arctic tribes revered the whale, as whales were important parts of their lives. In Inuit creation myths, when 'Big Raven', a deity in human form, found a stranded whale, he was told by the Great Spirit where to find special mushrooms that would give him the strength to drag the whale back to the sea and thus, return order to the world. In an Icelandic legend, a man threw a stone at a fin whale and hit the blowhole, causing the whale to burst. The man was told not to go to sea for twenty years, but during the nineteenth year he went fishing and a whale came and killed him. Whales played a major part in shaping the art forms of many coastal civilizations, such as the Norse, with some dating to the Stone Age. Petroglyphs on a cliff face at Bangudae, South Korea, show 300 depictions of various animals, a third of which are whales. Some show particular detail, including throat pleats, typical of rorquals. These petroglyphs show that the people who made them, around 7,000 to 3,500 B.C.E. in South Korea, had a very high dependency on whales. The Pacific Islanders and Australian Aborigines viewed whales as bringers of good and joy. One exception is French Polynesia, where, in many parts, cetaceans are met with great brutality. In coastal regions of China, Korea and Vietnam, the worship of whale gods, who were associated with Dragon Kings after the arrival of Buddhism, was present along with related legends. The god of the seas, according to Chinese folklore, was a large whale with human limbs. In Thailand, the most commonly found whale is Bryde's whale. Thai fishermen refer to them as Pla pu, "grandfather fish", owing to their large size and long-term presence in the upper reaches of the Gulf of Thailand. Fishermen do not fish for the whales, holding them in high regard and viewing them as akin to family members. 
They believe the whales bring good fortune, and their presence is considered a sign of a healthy marine environment. In Vietnam, whales hold a sense of divinity. They are so respected that people occasionally hold funerals for beached whales, a custom deriving from Vietnam's ancient sea-based Champa Kingdom. Whales have also played a role in sacred texts. The story of Jonah being swallowed by a great fish is told both in the Qur'an and in the biblical Book of Jonah (and is mentioned by Jesus in the New Testament: Matthew 12:40). This episode was frequently depicted in medieval art (for example, on a 12th-century column capital at the abbey church of Mozac, France). The Bible also mentions whales in Genesis 1:21, Job 7:12, and Ezekiel 32:2. The "leviathan" described at length in Job 41:1-34 is generally understood to refer to a whale. The "sea monsters" in Lamentations 4:3 have been taken by some to refer to marine mammals, in particular whales, although most modern versions use the word "jackals" instead. Alessandro Farnese, in 1585, and Francois, Duke of Anjou, in 1582, were each greeted on their ceremonial entry into the port city of Antwerp by floats including "Neptune and the Whale", indicating at least the city's dependence on the sea for its wealth. In 1896, an article in The Pall Mall Gazette popularised a practice of alternative medicine that probably began in the whaling town of Eden, Australia two or three years earlier. It was believed that climbing inside a whale carcass and remaining there for a few hours would relieve symptoms of rheumatism. Whales continue to be prevalent in modern literature. For example, Herman Melville's Moby Dick features a "great white whale" as the main antagonist for Ahab. The whale is an albino sperm whale, considered by Melville to be the largest type of whale, and is partly based on the historically attested bull whale Mocha Dick. Rudyard Kipling's Just So Stories includes the story of "How the Whale Got His Throat". A whale features in the award-winning children's book The Snail and the Whale (2003) by Julia Donaldson and Axel Scheffler. Niki Caro's film Whale Rider has a Māori girl ride a whale on her journey to prove herself a suitable heir to the chieftainship. Walt Disney's film Pinocchio features a showdown with a giant whale named Monstro at the end of the film. A recording of humpback whale song, Songs of the Humpback Whale, made by a team of marine scientists, became popular in 1970. Alan Hovhaness's orchestral composition And God Created Great Whales (1970) includes the recorded sounds of humpback and bowhead whales. Recorded whale songs also appear in a number of other musical works, including Léo Ferré's song "Il n'y a plus rien" and Judy Collins's "Farewell to Tarwathie" (on the 1970 album Whales and Nightingales). In captivity Belugas were the first whales to be kept in captivity. Other species were too rare, too shy, or too big. The first beluga was shown at Barnum's Museum in New York City in 1861. For most of the 20th century, Canada was the predominant source of wild belugas. They were taken from the St. Lawrence River estuary until the late 1960s, after which they were predominantly taken from the Churchill River estuary until capture was banned in 1992. Russia became the largest provider after the Canadian ban. Belugas are caught in the Amur River delta and along Russia's eastern coast, and are then either transported domestically to aquariums or dolphinariums in Moscow, St. 
Petersburg, and Sochi, or exported to other countries, such as Canada. Most captive belugas are caught in the wild, since captive-breeding programs are not very successful. As of 2006, 30 belugas were in Canada and 28 in the United States, and 42 deaths in captivity had been reported up to that time. A single specimen can reportedly fetch up to US$100,000 (£64,160) on the market. The beluga's popularity is due to its unique colour and its facial expressions. The latter is possible because while most cetacean "smiles" are fixed, the extra movement afforded by the beluga's unfused cervical vertebrae allows a greater range of apparent expression. Between 1960 and 1992, the US Navy carried out a program that included the study of marine mammals' abilities with sonar, with the objective of improving the detection of underwater objects. Dolphins were the first animals used in the program; belugas were used in large numbers from 1975 on. The program also included training the animals to carry equipment and material to divers working underwater, to hold cameras in their mouths to locate lost objects, and to survey ships, submarines, and other underwater activity. A similar program was used by the Russian Navy during the Cold War, in which belugas were also trained for antimining operations in the Arctic. Aquariums have tried housing other species of whales in captivity. The success of belugas turned attention to maintaining their relative, the narwhal, in captivity. However, in repeated attempts in the 1960s and 1970s, all narwhals kept in captivity died within months. A pair of pygmy right whales were retained in an enclosed area (with nets); they were eventually released in South Africa. There was one attempt to keep a stranded Sowerby's beaked whale calf in captivity; the calf rammed into the tank wall, breaking its rostrum, which resulted in death. It was thought that Sowerby's beaked whale had evolved to swim fast in a straight line, and that a tank was simply not big enough. There have been attempts to keep baleen whales in captivity, including three attempts with grey whales. Gigi was a grey whale calf that died in transport. Gigi II was another grey whale calf, captured in the Ojo de Liebre Lagoon and transported to SeaWorld. The calf was a popular attraction, and behaved normally, despite being separated from his mother. A year later, the whale grew too big to keep in captivity and was released; it was the first of two grey whales, the other being a calf named JJ, to be successfully kept in captivity. There were three attempts to keep minke whales in captivity in Japan. They were kept in a tidal pool with a sea-gate at the Izu Mito Sea Paradise. Another, unsuccessful, attempt was made by the U.S. One stranded humpback whale calf was kept in captivity for rehabilitation, but died days later.
Walkman
is a brand of portable audio players manufactured and marketed by Japanese company Sony since 1979. The original Walkman started out as a portable cassette player and the brand was later extended to serve most of Sony's portable audio devices; since 2011 it has consisted exclusively of digital flash memory players. The current flagship product as of 2022 is the WM1ZM2 player. Walkman cassette players were very popular during the 1980s, which led to "walkman" becoming a genericized term for personal compact stereos of any producer or brand. 220 million cassette-type Walkmans were sold by the end of production in 2010; including digital Walkman devices such as DAT, MiniDisc, CD (originally Discman, then renamed the CD Walkman) and memory-type media players, the brand had sold approximately 400 million units by that time. The Walkman brand has also been applied to transistor radios and Sony Ericsson mobile phones. History In March 1979, at the request of Masaru Ibuka, Sony's audio department modified the "Pressman", a small recorder used by journalists, into an even smaller device. After the prototype drew praise for its sound quality, Sony, under the leadership of Akio Morita, launched the Walkman in July 1979. Morita positioned the Walkman in the youth market, emphasizing youth, vitality, and fashion, and creating a headset culture. In February 1980, Sony began selling the Walkman worldwide, and in November 1980 it began using the non-standard Japanese-English brand name globally. The Walkman has sold more than 250 million units worldwide. When Akio Morita was knighted in October 1992, the headline in the British newspapers The Sun and The Daily Telegraph was "Arise, Sir Sony Walkman" [Doing Cultural Studies: The Story of the Sony Walkman, Paul du Gay] The Compact Cassette was developed by the Dutch electronics firm Philips and released in August 1963. In the late 1960s, the introduction of prerecorded compact cassettes made it possible to listen to music on portable devices as well as on car stereos, though gramophone records remained the most popular format for home listening. Portable tape players of various designs were available, but none of them were intended to be operated by a person as they were walking. In the 1970s, Brazilian inventor Andreas Pavel devised a method for carrying a player of this type on a belt around the waist, listening via headphones, but his "Stereobelt" concept did not include the engineering advancements required to yield high-quality sound reproduction while the tape player was subject to mechanical shock, as would be expected on a person walking. Pavel later lost his suit claiming the Walkman idea as his own. Finally in 2003, with Pavel threatening to file infringement proceedings in the remaining territories where he held protective rights, Sony approached him with a view to settling the matter amicably, which led to both parties signing a contract and confidentiality agreement in 2004. The settlement was reported to be a cash payment in the "low eight figures" and ongoing royalties on the sale of certain Walkman models. Sony cofounder Masaru Ibuka used Sony's bulky TC-D5 cassette recorder to listen to music while traveling for business. He asked the executive deputy president Norio Ohga to design a playback-only stereo version optimized for walking. The metal-cased blue-and-silver Walkman TPS-L2, the world's first low-cost personal stereo, went on sale in Japan on July 1, 1979, and was sold for around ¥33,000 (or $150.00). 
Though Sony predicted it would sell about 5,000 units a month, it sold more than 30,000 in the first two months. The Walkman was followed by a series of international releases; as overseas sales companies objected to the wasei-eigo name, it was sold under several names, including Sound-about in the United States, Freestyle in Australia and Sweden, and Stowaway in the UK. Eventually, in the early 1980s, Walkman caught on globally and Sony used the name worldwide. The TPS-L2 was introduced in the US in June 1980. The 1980s was the decade of the intensive development of the Walkman lineup. In 1981 Sony released the second Walkman model, the WM-2, which was significantly smaller than the TPS-L2 thanks to the "inverse" mounting of the power-operated magnetic head and soft-touch buttons. Sony applied the "Walkman" brand to some transistor radios starting with the matching blue SRF-40 FM Walkman in 1980, and added a radio system to some Walkman cassette models starting with the model WM-F1 in 1982. The first model with a Dolby noise-reduction system and an auto-reverse function appeared in 1982. The first ultra-compact "cassette-size" Walkman, the model WM-20, was introduced in 1983, with a telescopic case. This allowed even easier carrying of a Walkman in bags or pockets. In October 1985, the WM-101 model was the first in its class with a "gum stick" rechargeable battery. In 1986 Sony presented the first model outfitted with a remote control, as well as one with a solar battery (WM-F107). Within a decade of launch, Sony held a 50% market share in the United States and 46% in Japan. Two limited edition 10th anniversary models were released in 1989 (WM-701S/T) in Japan, made of brass and plated in sterling silver. Only a few hundred were built of each. A 15th anniversary model was made on July 1, 1994, with vertical loading, and a 20th anniversary prestige model followed on July 1, 1999. By 1989, 10 years after the launch of the first model, over Walkmans had been sold worldwide, and units had been manufactured by 1995. By 1999, 20 years after the introduction of the first model, Sony had sold 186 million cassette Walkmans. Portable compact disc players led to the decline of the cassette Walkman, which was discontinued in Japan in 2010. The last cassette-based model available in the US was the WM-FX290W, first released in 2004. Marketing The marketing of the Walkman helped introduce the idea of "Japanese-ness" into global culture, synonymous with miniaturization and high technology. The "Walk-men" and "Walk-women" in advertisements were created to be ideal reflections of the viewing audience. Sony implemented a marketing strategy, hiring young adults to walk around in public wearing a Walkman and offering nearby people the chance to test out the product. Sony also hired actors to pose with the Walkman around the streets of Tokyo as an additional form of promotion. A major component of the Walkman advertising campaign was overspecialization of the device. Prior to the Walkman, the common device for portable music was the portable radio, which could only offer listeners standard music broadcasts. Having the ability to customize a playlist was a new and exciting revolution in music consumption. Potential buyers had the opportunity to choose their perfect match in terms of mobile listening technology. The ability to play one's personal choice of music and listen privately was a huge selling point of the Walkman, especially amongst teens, who greatly contributed to its success. 
A diversity of features and styles suggested that there would be a product which was "the perfect choice" for each consumer. This method of marketing to an extremely expansive user-base while maintaining the idea that the product was made for each individual "[got] the best of all possible worlds—mass marketing and personal differentiation". Impact and legacy Culturally the Walkman had a great effect and it became ubiquitous. According to Time, the Walkman's "unprecedented combination of portability (it ran on two AA batteries) and privacy (it featured a headphone jack but no external speaker) made it the ideal product for thousands of consumers looking for a compact portable stereo that they could take with them anywhere". According to The Verge, "the world changed" on the day the Walkman was released. The Walkman became an icon in 1980s culture. In 1986, the word "Walkman" entered the Oxford English Dictionary. Millions used the Walkman during exercise, marking the beginning of the aerobics fad. Between 1987 and 1997, the height of the Walkman's popularity, the number of people who said they walked for exercise increased by 30%. Other firms, including Aiwa, Panasonic and Toshiba, produced similar products, and in 1983 cassettes outsold vinyl for the first time. The Walkman has been cited as influencing people's relationship with music and technology, due to its "solitary" and "personal" nature, as users were listening to their music of choice instead of radio. It has been seen as a precursor of personal mainstream tech possessions such as personal computers or mobile phones. Headphones also started to be worn in public. This caused safety controversies in the US, which in 1982 led to the mayor of Woodbridge, New Jersey banning Walkman from being worn in public due to pedestrian accidents. In the market, the Walkman's success also led to great adoption of the Compact Cassette format. Within a few years, cassettes were outselling vinyl records, and would continue to do so until the compact disc (CD) overtook cassette sales in 1991. In German-speaking countries, the use of "Walkman" became generic, meaning a personal stereo of any make, to a degree that the Austrian Supreme Court of Justice ruled in 2002 that Sony could not prevent others from using the term "Walkman" to describe similar goods. It is therefore an example of what marketing experts call the "genericide" of a brand. A large statue of a Sports Walkman FM was erected in Tokyo's Ginza district in 2019 in celebration of the 40th anniversary. Other Walkman In 1989, Sony released portable Video8 recorders marketed as Video Walkman, extending the brand name. In 1990 Sony released portable Digital Audio Tape (DAT) players marketed as DAT Walkman. It was extended further in 1992 for MiniDisc players with the MD Walkman brand. From 1997, Sony's Discman range of portable compact disc (CD) players started to rebrand as CD Walkman. In 2000, the Walkman brand (the entire range) was unified, and a new small icon, "W.", was made for the branding. From 2012, Walkman was also the name of the music player software on Sony Xperia. It has since been rebranded to Music. Digital players (1999–present) On December 21, 1999, Sony launched its first digital audio players (DAP), under the name Network Walkman (alongside players developed by the VAIO division). The first player, which used Memory Stick storage medium, was branded as MS Walkman, shortly before the Walkman brand unification. 
Most later models used built-in solid-state flash memory, although hard-disk-based players were also released from 2004 to 2007. They came with OpenMG copyright protection and, until 2004, exclusively supported Sony's in-house ATRAC format; there was no support for the industry-standard MP3 format, as Sony wanted to protect its records division, Sony BMG, from piracy. Additionally, Walkman-branded mobile phones were also made by the Sony Ericsson joint venture. Sony could not repeat the success of the cassette player in the 21st-century digital audio player (DAP) market. Rival Apple's iPod range became a large success in the market, hindering Walkman sales internationally, though the Walkman fared better domestically. For several years the Network Walkman had a paltry market share and struggled against numerous other rivals such as Creative, Rio, Mpio and iRiver; sales and share eventually increased fivefold in 2005 and continued improving, but remained small. Its pricing policy, SonicStage software and lack of MP3 support in earlier years have been suggested as factors in its performance. Its U.S. market share in 2006 was 1.9%, placing it behind Apple, SanDisk, Creative and Samsung. In Japan its share in 2009–2010 was between 43 and 48%, slightly ahead of Apple. Meanwhile, Sony Computer Entertainment, a Sony division not involved in Walkman products, officially described its PlayStation Portable in 2004 as the "21st century Walkman". Current range Walkman portable digital audio and media players are the only Walkman-branded products still being produced today – although the "Network" prefix is no longer being used, the model numbers still carry the "NW-" prefix. The current product range as of 2022 is:
A Series – flagship mid-range players
B Series (except Japan) – budget-oriented thumb-style music players
E Series – entry-level players
S Series (Japan) – entry-level players
W/WS Series – wearable music players
WM1 Series – flagship luxurious high-end players (part of Sony's Signature Series of audio products)
ZX Series – high-end music players
Since 2017, Sony has provided the Music Center for PC software on Microsoft Windows, designed for both content transfer and playback for Walkman and other audio products. List of products
Wombat
Wombats are short-legged, muscular quadrupedal marsupials of the family Vombatidae that are native to Australia. Living species are about in length with small, stubby tails and weigh between . They are adaptable and habitat tolerant, and are found in forested, mountainous, and heathland areas of southern and eastern Australia, including Tasmania, as well as an isolated patch of about in Epping Forest National Park in central Queensland. Etymology The name "wombat" comes from the now nearly extinct Dharug language spoken by the aboriginal Dharug people, who originally inhabited the Sydney area. It was first recorded in January 1798, when John Price and James Wilson, a white man who had adopted aboriginal ways, visited the area of what is now Bargo, New South Wales. Price wrote: "We saw several sorts of dung of different animals, one of which Wilson called a 'Whom-batt', which is an animal about high, with short legs and a thick body with a large head, round ears, and very small eyes; is very fat, and has much the appearance of a badger." Wombats were often called badgers by early settlers because of their size and habits. Because of this, localities such as Badger Creek, Victoria, and Badger Corner, Tasmania, were named after the wombat. The spelling went through many variants over the years, including "wambat", "whombat", "womat", "wombach", and "womback", possibly reflecting dialectal differences in the Dharug language. Evolution and taxonomy Though genetic studies of the Vombatidae have been undertaken, the evolution of the family is not well understood. Wombats are estimated to have diverged from other Australian marsupials relatively early, as long as 40 million years ago, while some estimates place the divergence at around 25 million years ago. Some prehistoric wombat genera greatly exceeded modern wombats in size. The largest known wombat, Phascolonus, which went extinct approximately 40,000 years ago, is estimated to have had a body mass of up to . Characteristics Wombats dig extensive burrow systems with their rodent-like front teeth and powerful claws. One distinctive adaptation of wombats is their backward-facing pouch: when digging, the wombat does not gather soil in its pouch over its young. Although mainly crepuscular and nocturnal, wombats may also venture out to feed on cool or overcast days. They are not commonly seen, but leave ample evidence of their passage, treating fences as minor inconveniences to be gone through or under. Wombats leave distinctive cubic faeces. As wombats arrange these faeces to mark territories and attract mates, it is believed that the cubic shape makes them more stackable and less likely to roll, which gives this shape a biological advantage. The method by which the wombat produces them is not well understood, but it is believed that the intestine stretches unevenly, with two flexible and two stiff regions around its circumference. The adult wombat produces between 80 and 100 pieces of faeces in a single night, and four to eight pieces in each bowel movement. In 2019 the production of cube-shaped wombat faeces was the subject of the Ig Nobel Prize for Physics, won by Patricia Yang and David Hu. All wombat teeth lack roots and are ever-growing, like the incisors of rodents. Wombats are herbivores; their diets consist mostly of grasses, sedges, herbs, bark, and roots. Their incisor teeth somewhat resemble those of rodents (rats, mice, etc.), being adapted for gnawing tough vegetation. 
Like many other herbivorous mammals, they have a large diastema between their incisors and the cheek teeth, which are relatively simple. The dental formula of wombats is 1.0.1.4 / 1.0.1.4, giving 24 teeth in total. Wombats' fur can vary from a sandy colour to brown, or from grey to black. All three known extant species average around 1 m in length and weigh between 20 and 35 kg. Male wombats have penile spines, a non-pendulous scrotum, and three pairs of bulbourethral glands. The testes, prostate, and bulbourethral glands enlarge during the breeding season. Female wombats give birth to a single young after a gestation period of roughly 20–30 days, which varies between species. All species have well-developed pouches, which the young leave after about six to seven months. Wombats are weaned after 15 months and are sexually mature at 18 months. A group of wombats is known as a wisdom, a mob, or a colony. Wombats typically live up to 15 years in the wild, but can live past 20 and even 30 years in captivity. The longest-lived captive wombat reached 34 years of age. In 2020, biologists discovered that wombats, like many other Australian marsupials, display bio-fluorescence under ultraviolet light. Ecology and behaviour Wombats have an extraordinarily slow metabolism, taking around 8 to 14 days to complete digestion, which aids their survival in arid conditions. They generally move slowly. Wombats defend home territories centred on their burrows and react aggressively to intruders. The common wombat occupies a considerably larger home range than the hairy-nosed species. Dingoes and Tasmanian devils prey on wombats. Extinct predators likely included Thylacoleo and possibly the thylacine (Tasmanian tiger). The wombat's primary defence is its toughened rear hide, with most of the posterior made of cartilage. This, combined with its lack of a meaningful tail, makes it difficult for any predator that follows the wombat into its tunnel to bite and injure its target. When attacked, wombats dive into a nearby tunnel, using their rumps to block a pursuing attacker. According to an urban legend, a wombat will sometimes allow an intruder to force its head over the wombat's back and then use its powerful legs to crush the predator's skull against the roof of the tunnel; however, there is no evidence to support this. Wombats are generally quiet animals. Bare-nosed wombats can make a wider range of sounds than the hairy-nosed wombats. Wombats tend to be more vocal during mating season. When angered, they can make hissing sounds. Their call sounds somewhat like a pig's squeal. They can also make grunting noises, a low growl, a hoarse cough, and a clicking noise. Species The three extant species of wombat are all endemic to Australia and a few offshore islands. They are protected under Australian law. Common wombat (Vombatus ursinus), which has three subspecies: Vombatus ursinus hirsutus, found on the Australian mainland; Vombatus ursinus tasmaniensis, found in Tasmania; and Vombatus ursinus ursinus, found on Flinders Island and Maria Island in the Bass Strait. Northern hairy-nosed wombat or yaminon (Lasiorhinus krefftii), which is critically endangered. Southern hairy-nosed wombat (Lasiorhinus latifrons), the smallest of the three species. Human relations History Depictions of the animals in rock art are exceptionally rare, though examples estimated to be up to 4,000 years old have been discovered in Wollemi National Park. The wombat is depicted in Aboriginal Dreamtime as an animal of little worth.
The mainland stories tell of the wombat as originating from a person named Warreen, whose head had been flattened by a stone and whose tail had been amputated as punishment for selfishness. In contrast, the Tasmanian Aboriginal story, first recorded in 1830, tells of the wombat (known as the drogedy or publedina) as an animal that the great spirit Moihernee had asked hunters to leave alone. In both cases, the wombat is regarded as having been banished to its burrowing habitat. Estimates of wombat distribution prior to European settlement are that numbers of all three surviving species were prolific and that they covered a range more than ten times greater than that of today. After the ship Sydney Cove ran aground on Clarke Island in February 1797, the crew of the salvage ship Francis discovered wombats on the island. A live animal was taken back to Port Jackson. Matthew Flinders, who was travelling on board the Francis on its third and final salvage trip, also decided to take a wombat specimen from the island to Port Jackson. Governor John Hunter later sent the animal's corpse to Joseph Banks at the Literary and Philosophical Society to verify that it was a new species. The island was named Clarke Island after William Clark. Wombats were classified as vermin in 1906, and a bounty was introduced in 1925. This, together with the removal of a substantial amount of habitat, has greatly reduced their numbers and range. Attacks on humans In addition to being bitten, humans can receive puncture wounds from wombat claws. Startled wombats can also charge humans and bowl them over, with the attendant risk of broken bones from the fall. One naturalist, Harry Frauca, once received a bite deep into the flesh of his leg, through a rubber boot, trousers, and thick woollen socks. A UK newspaper, The Independent, reported that on 6 April 2010, a 59-year-old man from rural Victoria was mauled by a wombat (thought to have been angered by mange), suffering a number of cuts and bite marks that required hospital treatment; he resorted to killing it with an axe. Cultural significance Some farmers consider common wombats a nuisance, due primarily to their burrowing behaviour. "Fatso the Fat-Arsed Wombat" was the tongue-in-cheek "unofficial" mascot of the 2000 Sydney Olympics. Since 2005, an unofficial holiday called Wombat Day has been observed on 22 October. Wombat meat has been a source of bush food from the arrival of Aboriginal Australians to the arrival of Europeans. Owing to the protection of the species, wombat meat is no longer part of mainstream Australian cuisine, but wombat stew was once one of the few truly Australian dishes. In the 20th century, the more easily obtained rabbit meat was more commonly used. (Rabbits are now considered an invasive pest in Australia.) The name of the dish is also used by a popular children's book and musical. Wombats have featured on Australian postage stamps and coins. The hairy-nosed wombats have featured mainly to highlight their elevated conservation status. The northern hairy-nosed wombat featured on an Australian 1974 20-cent stamp and on an Australian 1981 five-cent stamp. The common wombat has appeared on a 1987 37-cent stamp and an Australian 1996 95-cent stamp. The 2006 Australian Bush Babies stamp series features an AU$1.75 stamp of a baby common wombat, and the 2010 Rescue to Release series features a 60-cent stamp of a common wombat being treated by a veterinarian. Wombats are rarely seen on circulated Australian coins; an exception is a 50-cent coin that also shows a koala and a lorikeet.
The common wombat appeared on a 2005 commemorative $1 coin and the northern hairy-nosed wombat on a 1998 Australia Silver Proof $10 coin. Many places in Australia have been named after the wombat, including a large number of places where they are now locally extinct in the wild.
Biology and health sciences
Diprotodontia
Animals
33865
https://en.wikipedia.org/wiki/Warhead
Warhead
A warhead is the section of a device that contains the explosive agent or toxic (biological, chemical, or nuclear) material that is delivered by a missile, rocket, torpedo, or bomb. Classification Types of warheads include:
- Explosive: An explosive charge is used to disintegrate the target and damage surrounding areas with a blast wave.
  - Conventional: Chemicals such as gunpowder and high explosives store significant energy within their molecular bonds. This energy can be released quickly by a trigger, such as an electric spark. Thermobaric weapons enhance the blast effect by utilizing the surrounding atmosphere in their explosive reactions.
    - Blast: A strong shock wave is provided by the detonation of the explosive.
    - Fragmentation: Metal fragments are projected at high velocity to cause damage or injury.
    - Continuous rod: Metal bars welded at their ends form a compact cylinder of interconnected rods, which is violently expanded into a contiguous zig-zag-shaped ring by an explosive detonation. The rapidly expanding ring produces a planar cutting effect that is devastating against military aircraft, which may be designed to be resistant to shrapnel.
    - Shaped charge: The effect of the explosive charge is focused onto a specially shaped metal liner to project a hypervelocity jet of metal, to perforate heavy armour.
    - Explosively formed penetrator: Instead of turning a thin metal liner into a focused jet, the detonation wave is directed against a concave metal plate at the front of the warhead, propelling it at high velocity while simultaneously deforming it into a projectile.
  - Nuclear: A runaway nuclear fission (fission bomb) or nuclear fusion (thermonuclear weapon) reaction causes an immense release of energy.
- Chemical: A toxic chemical, such as poison gas or nerve gas, is dispersed, designed to injure or kill human beings.
- Biological: An infectious agent, such as anthrax spores, is dispersed, designed to sicken or kill humans. Often, a biological or chemical warhead will use an explosive charge for rapid dispersal.
Detonators Explosive warheads contain detonators to trigger the explosion. Types of detonators include contact (impact) detonators, proximity fuzes, timed detonators, and remotely commanded detonators.
Technology
Missiles
null
33879
https://en.wikipedia.org/wiki/Windows%20XP
Windows XP
Windows XP is a major release of Microsoft's Windows NT operating system. It was released to manufacturing on August 24, 2001, and later to retail on October 25, 2001. It is a direct successor to Windows 2000 for high-end and business users and Windows Me for home users. Development of Windows XP began in the late 1990s under the codename "Neptune", built on the Windows NT kernel and explicitly intended for mainstream consumer use. An updated version of Windows 2000 was also initially planned for the business market. However, in January 2000, both projects were scrapped in favor of a single OS codenamed "Whistler", which would serve as a single platform for both consumer and business markets. As a result, Windows XP is the first consumer edition of Windows not based on the Windows 95 kernel or MS-DOS. Windows XP removed support for PC-98, i486, and SGI Visual Workstation 320 and 540, and will only run on 32-bit x86 CPUs and devices that use BIOS firmware. Upon its release, Windows XP received critical acclaim, with reviewers noting increased performance and stability (especially compared to Windows Me), a more intuitive user interface, improved hardware support, and expanded multimedia capabilities. Windows XP and Windows Server 2003 were succeeded by Windows Vista and Windows Server 2008, released in 2007 and 2008, respectively. Mainstream support for Windows XP ended on April 14, 2009, and extended support ended on April 8, 2014. Windows Embedded POSReady 2009, based on Windows XP Professional, received security updates until April 2019. The final security update for Service Pack 3 was released on May 14, 2019. Unofficial methods were made available to apply the updates to other editions of Windows XP, but Microsoft has discouraged this practice, citing compatibility issues. Globally, under 0.6% of Windows PCs and 0.1% of all devices across all platforms continue to run Windows XP. Development In the late 1990s, initial development of what would become Windows XP was focused on two individual products: "Odyssey", which was reportedly intended to succeed the future Windows 2000, and "Neptune", which was reportedly a consumer-oriented operating system using the Windows NT architecture, succeeding the MS-DOS-based Windows 98. However, the projects proved to be too ambitious. In January 2000, shortly before the official release of Windows 2000, technology writer Paul Thurrott reported that Microsoft had shelved both Neptune and Odyssey in favor of a new product codenamed "Whistler", named after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort. The goal of Whistler was to unify the consumer and business-oriented Windows lines under a single Windows NT platform. Thurrott stated that Neptune had become "a black hole when all the features that were cut from Windows Me were simply re-tagged as Neptune features. And since Neptune and Odyssey would be based on the same code-base anyway, it made sense to combine them into a single project". At PDC on July 13, 2000, Microsoft announced that Whistler would be released during the second half of 2001, and also unveiled the first preview build, 2250, which featured an early implementation of Windows XP's visual styles system and interface changes to Windows Explorer and the Control Panel. Microsoft released the first public beta build of Whistler, build 2296, on October 31, 2000.
Subsequent builds gradually introduced features that users of the release version of Windows XP would recognize, such as Internet Explorer 6.0, the Microsoft Product Activation system, and the Bliss desktop background. Whistler was officially unveiled during a media event on February 5, 2001, under the name Windows XP, where XP stands for "eXPerience". Release In June 2001, Microsoft indicated that it was planning to spend at least US$1 billion on marketing and promoting Windows XP, in conjunction with Intel and other PC makers. The theme of the campaign, "Yes You Can", was designed to emphasize the platform's overall capabilities. Microsoft had originally planned to use the slogan "Prepare to Fly", but it was replaced because of sensitivity issues in the wake of the September 11 attacks. On August 24, 2001, Windows XP build 2600 was released to manufacturing (RTM). During a ceremonial media event at the Microsoft Redmond campus, copies of the RTM build were given to representatives of several major PC manufacturers in briefcases, who then flew off on decorated helicopters. While PC manufacturers would be able to release devices running XP beginning on September 24, 2001, XP was expected to reach general retail availability on October 25, 2001. On the same day, Microsoft also announced the final retail pricing of XP's two main editions, "Home" (as a replacement for Windows Me for home computing) and "Professional" (as a replacement for Windows 2000 for high-end users). New and updated features User interface While retaining some similarities to previous versions, Windows XP's interface was overhauled with a new visual appearance, with an increased use of alpha compositing effects, drop shadows, and "visual styles", which completely changed the appearance of the operating system. The number of effects enabled is determined by the operating system based on the computer's processing power, and they can be enabled or disabled on a case-by-case basis. XP also added ClearType, a new subpixel rendering system designed to improve the appearance of fonts on liquid-crystal displays. A new set of system icons was also introduced. The default wallpaper, Bliss, is a photo of a landscape in the Napa Valley outside Napa, California, with rolling green hills and a blue sky with stratocumulus and cirrus clouds. The Start menu received its first major overhaul in XP, switching to a two-column layout with the ability to list, pin, and display frequently used applications, recently opened documents, and the traditional cascading "All Programs" menu. The taskbar can now group windows opened by a single application into one taskbar button, with a popup menu listing the individual windows. The notification area also hides "inactive" icons by default. A "common tasks" list was added, and Windows Explorer's sidebar was updated to use a new task-based design with lists of common actions; the tasks displayed are contextually relevant to the type of content in a folder (e.g. a folder containing music offers options to play all the files in the folder or burn them to a CD). Fast user switching allows additional users to log into a Windows XP machine without existing users having to close their programs and log out. Although only one user at a time can use the console (i.e., monitor, keyboard, and mouse), previous users can resume their sessions once they regain control of the console.
Service Pack 2 and Service Pack 3 also introduced new features to Windows XP post-release, including the Windows Security Center, Bluetooth support, Data Execution Prevention, Windows Firewall, and support for SDHC cards that are larger than 4 GB and smaller than 32 GB. Infrastructure Windows XP uses prefetching to improve startup and application launch times. It also became possible to revert the installation of an updated device driver, should the updated driver produce undesirable results. A copy protection system known as Windows Product Activation was introduced with Windows XP and its server counterpart, Windows Server 2003. All non-enterprise (Volume Licensing) Windows licenses must be tied to a unique ID generated using information from the computer hardware, transmitted either via the internet or a telephone hotline. If Windows is not activated within 30 days of installation, the OS will cease to function until it is activated. Windows also periodically verifies the hardware to check for changes. If significant hardware changes are detected, the activation is voided, and Windows must be re-activated (an illustrative sketch of this logic appears at the end of this article). Networking and internet functionality Windows XP was originally bundled with Internet Explorer 6, Outlook Express 6, Windows Messenger, and MSN Explorer. New networking features were also added, including Internet Connection Firewall, Internet Connection Sharing integration with UPnP, NAT traversal APIs, Quality of Service features, IPv6 and Teredo tunneling, Background Intelligent Transfer Service, extended fax features, network bridging, peer-to-peer networking, support for most DSL modems, IEEE 802.11 (Wi-Fi) connections with auto-configuration and roaming, TAPI 3.1, and networking over FireWire. Remote Assistance and Remote Desktop were also added, which allow users to connect to a computer running Windows XP from across a network or the Internet and access their applications, files, printers, and devices, or request help. Improvements were also made to IntelliMirror features such as Offline Files, roaming user profiles, and folder redirection. Backward compatibility To enable running software that targets or locks out specific versions of Windows, "Compatibility mode" was added. It allows the system to present itself to an application as one of several earlier versions of Windows, going back as far as Windows 95. This feature was first introduced in Windows 2000 Service Pack 2, released five months before Windows XP, and was backported from prerelease Windows XP builds. Unlike in Windows XP, however, the feature was not enabled by default in Windows 2000 and had to be manually activated through the Register Server utility; it was also only available to administrator users. Windows XP has this feature activated out of the box and also grants it to regular users. Other features Improved application compatibility and shims compared to Windows 2000. DirectX 8.1, upgradeable to DirectX 9.0c. A number of new features in Windows Explorer including task panes, thumbnails, and the option to view photos as a slideshow. Improved imaging features such as Windows Picture and Fax Viewer. Faster start-up (because of improved prefetch functions), logon, logoff, hibernation, and application launch sequences. Numerous improvements to increase system reliability, such as improved System Restore, Automated System Recovery, and driver reliability improvements through Device Driver Rollback.
Hardware support improvements such as FireWire 800, and improvements to multi-monitor support under the name "DualView". Fast user switching. The ClearType font rendering mechanism, which is designed to improve text readability on liquid-crystal display (LCD) and similar monitors, especially laptops. Side-by-side assemblies and registration-free COM. General improvements to international support, such as more locales, languages and scripts, MUI support in Terminal Services, improved Input Method Editors, and National Language Support. Removed features Some of the programs and features that were part of previous versions of Windows did not make it to Windows XP. Various MS-DOS commands available in its Windows 9x predecessors were removed, as were the POSIX and OS/2 subsystems. In networking, NetBEUI, NWLink and NetDDE were deprecated and not installed by default. Plug-and-play-incompatible communication devices (like modems and network interface cards) were no longer supported. Service Pack 2 and Service Pack 3 also removed features from Windows XP, including support for TCP half-open connections and the address bar on the taskbar. Editions Windows XP was released in two major editions at launch: Home Edition and Professional Edition. Both editions were made available at retail as pre-loaded software on new computers and as boxed copies. Boxed copies were sold as "Upgrade" or "Full" licenses; the "Upgrade" versions were slightly cheaper but required an existing version of Windows to install, while the "Full" version could be installed on systems without an operating system or existing version of Windows. The two editions of XP were aimed at different markets: Home Edition is explicitly intended for consumer use and disables or removes certain advanced and enterprise-oriented features present in Professional, such as the ability to join a Windows domain, Internet Information Services, and the Multilingual User Interface. Windows 98 or Me can be upgraded to either edition, but Windows NT 4.0 and Windows 2000 can only be upgraded to Professional. Windows' software license agreement for pre-loaded licenses allows the software to be "returned" to the OEM for a refund if the user does not wish to use it. Despite the refusal of some manufacturers to honor the entitlement, it has been enforced by courts in some countries. Two specialized variants of XP were introduced in 2002 for certain types of hardware, exclusively through OEM channels as pre-loaded software. Windows XP Media Center Edition was initially designed for high-end home theater PCs with TV tuners (marketed under the term "Media Center PC"), offering expanded multimedia functionality, an electronic program guide, and digital video recorder (DVR) support through the Windows Media Center application. Microsoft also unveiled Windows XP Tablet PC Edition, which contains additional pen input features and is optimized for mobile devices meeting its Tablet PC specifications. Two different 64-bit editions of XP were made available. The first, Windows XP 64-Bit Edition, was intended for IA-64 (Itanium) systems; as IA-64 usage declined on workstations in favor of AMD's x86-64 architecture, the Itanium edition was discontinued in January 2005. A new 64-bit edition supporting the x86-64 architecture, called Windows XP Professional x64 Edition, was released in April 2005. Microsoft also targeted emerging markets with the 2004 introduction of Windows XP Starter Edition, a special variant of Home Edition intended for low-cost PCs.
The OS is primarily aimed at first-time computer owners, containing heavy localization (including wallpapers and screen savers incorporating images of local landmarks) and a "My Support" area which contains video tutorials on basic computing tasks. It also removes certain "complex" features, and does not allow users to run more than three applications at a time. After a pilot program in India and Thailand, Starter was released in other emerging markets throughout 2005. In 2006, Microsoft also unveiled the FlexGo initiative, which would also target emerging markets with subsidized PCs on a pre-paid, subscription basis. As a result of unfair-competition lawsuits in Europe and South Korea, which both alleged that Microsoft had improperly leveraged its status in the PC market to favor its own bundled software, Microsoft was ordered to release special editions of XP in these markets that excluded certain applications. In March 2004, after the European Commission fined Microsoft €497 million (US$603 million), Microsoft was ordered to release "N" editions of XP that excluded Windows Media Player, encouraging users to pick and download their own media player software. As it was sold at the same price as the edition with Windows Media Player included, certain OEMs (such as Dell, who offered it for a short period, along with Hewlett-Packard, Lenovo and Fujitsu Siemens) chose not to offer it. Consumer interest was minuscule, with roughly 1,500 units shipped to OEMs, and no reported sales to consumers. In December 2005, the Korean Fair Trade Commission ordered Microsoft to make available editions of Windows XP and Windows Server 2003 that do not contain Windows Media Player or Windows Messenger. The "K" and "KN" editions of Windows XP were released in August 2006, are only available in English and Korean, and contain links to third-party instant messenger and media player software. Service packs A service pack is a cumulative update package that is a superset of all updates, and even service packs, that have been released before it. Three service packs have been released for Windows XP. Service Pack 3 is slightly different, in that it needs at least Service Pack 1 to have been installed in order to update a live OS. However, Service Pack 3 can still be embedded into a Windows installation disc; SP1 is not reported as a prerequisite for doing so. The boot screens for all editions of Windows XP were unified by Service Pack 2 with a new one that no longer displays the SKU, with the boot screen for Home Edition using a blue progress bar instead of a green one. The copyright years on the boot screen were also removed. Service Pack 1 Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contained over 300 minor, post-RTM bug fixes, along with all security patches released since the original release of XP. SP1 also added USB 2.0 support, the Microsoft Java Virtual Machine, .NET Framework support, and support for technologies used by the then-upcoming Media Center and Tablet PC editions of XP. The most significant change in SP1 was the addition of Set Program Access and Defaults, a settings page which allows programs to be set as default for certain types of activities (such as media players or web browsers) and allows access to bundled Microsoft programs (such as Internet Explorer or Windows Media Player) to be disabled. This feature was added to comply with the settlement of United States v.
Microsoft Corp., which required Microsoft to offer OEMs the ability to bundle third-party competitors to the software it bundles with Windows (such as Internet Explorer and Windows Media Player), and to give them the same level of prominence as the programs normally bundled with the OS. On February 3, 2003, Microsoft released Service Pack 1a (SP1a). It was the same as SP1, except that the Microsoft Java Virtual Machine was excluded. Service Pack 2 Service Pack 2 (SP2) for Windows XP Home Edition and Professional Edition was released on August 25, 2004. Headline features included WPA encryption compatibility for Wi-Fi and usability improvements to the Wi-Fi networking user interface, partial Bluetooth support, and various improvements to security systems. Headed by former computer hacker Window Snyder, the service pack's security improvements (codenamed "Springboard", as these features were intended to underpin additional changes in Longhorn) included a major revision to the included firewall (renamed Windows Firewall, and now enabled by default), and an update to Data Execution Prevention, which gained hardware support in the NX bit that can stop some forms of buffer overflow attacks. Raw socket support was removed (which supposedly limits the damage done by zombie machines), and the Windows Messenger service (which had been abused to cause pop-up advertisements to be displayed as system messages without a web browser or any additional software) was disabled by default. Additionally, security-related improvements were made to e-mail and web browsing. Service Pack 2 also added Security Center, an interface that provides a general overview of the system's security status, including the state of the firewall and automatic updates. Third-party firewall and antivirus software can also be monitored from Security Center. In August 2006, Microsoft released updated installation media for Windows XP and Windows Server 2003 SP2 (SP2b), in order to incorporate a patch requiring ActiveX controls in Internet Explorer to be manually activated before a user may interact with them. This was done so that the browser would not violate a patent owned by Eolas. Microsoft has since licensed the patent, and released a patch reverting the change in April 2008. In September 2007, another minor revision known as SP2c was released for XP Professional, extending the number of available product keys for the operating system to "support the continued availability of Windows XP Professional through the scheduled system builder channel end-of-life (EOL) date of January 31, 2009." Windows XP Service Pack 2 was later included in Windows Embedded for Point of Service and Windows Fundamentals for Legacy PCs. Service Pack 3 The third and final service pack, SP3, was released through different channels between April 21 and June 10, 2008, about a year after the release of Windows Vista and about a year before the release of Windows 7. Service Pack 3 was not available for Windows XP x64 Edition, which was based on the Windows Server 2003 kernel and, as a result, used its service packs rather than the ones for the other editions. It began being automatically pushed out to Automatic Updates users on July 10, 2008. A feature set overview detailing new features available separately as stand-alone updates to Windows XP, as well as backported features from Windows Vista, was posted by Microsoft. A total of 1,174 fixes are included in SP3.
Service Pack 3 could be installed on systems with Internet Explorer versions up to and including 8, but neither Internet Explorer 7 nor Internet Explorer 8 was included as part of SP3; Internet Explorer 8 was released separately, about one year after XP SP3, and was also included in Windows 7. Service Pack 3 included security enhancements over and above those of SP2, including APIs allowing developers to enable Data Execution Prevention for their code independent of system-wide compatibility enforcement settings, the Security Support Provider Interface, improvements to WPA2 security, and an updated version of the Microsoft Enhanced Cryptographic Provider Module that is FIPS 140-2 certified. In incorporating all previously released updates not included in SP2, Service Pack 3 included many other key features. Windows Imaging Component allowed camera vendors to integrate their own proprietary image codecs with the operating system's features, such as thumbnails and slideshows. In enterprise features, Remote Desktop Protocol 6.1 included support for ClearType and 32-bit color depth over RDP, while improvements made to Windows Management Instrumentation in Windows Vista to reduce the possibility of corruption of the WMI repository were backported to XP SP3. In addition, SP3 contains updates to the operating system components of Windows XP Media Center Edition (MCE) and Windows XP Tablet PC Edition, and security updates for .NET Framework version 1.0, which is included in these editions. However, it does not include update rollups for the Windows Media Center application in Windows XP MCE 2005. SP3 also omits security updates for Windows Media Player 10, although the player is included in Windows XP MCE 2005. The Address Bar DeskBand on the taskbar is no longer included because of antitrust violation concerns. Unofficial SP3 ZIP download packages were released on a now-defunct website called The Hotfix from 2005 to 2007. The owner of the website, Ethan C. Allen, was a former Microsoft employee in Software Quality Assurance who would comb through Microsoft Knowledge Base articles daily and download the new hotfixes Microsoft put online within the articles. The articles would carry a "kbwinxppresp3fix" and/or "kbwinxpsp3fix" tag, allowing Allen to easily determine which fixes were planned for the official SP3 release to come. Microsoft publicly stated at the time that the SP3 pack was unofficial and advised users not to install it. Allen also released a Vista SP1 package in 2007, for which he received a cease-and-desist email from Microsoft. Windows XP Service Pack 3 was later included in Windows Embedded Standard 2009 and Windows Embedded POSReady 2009. System requirements The minimum system requirements for Windows XP include a 233 MHz processor, 64 MB of RAM, 1.5 GB of available hard disk space, and a Super VGA (800 × 600) display; a 300 MHz processor and 128 MB of RAM are recommended.
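The grace-period and hardware-check behaviour of Windows Product Activation, described in the Infrastructure section above, can be illustrated with a toy sketch. Microsoft's actual activation algorithm is proprietary and more tolerant of partial hardware changes; the hash construction, component names, and checks below are invented purely for illustration.

```python
# Toy sketch of product-activation ideas: a hardware-derived identifier,
# a 30-day grace period, and re-activation when the hardware changes.
# NOT Microsoft's algorithm; every detail here is an illustrative assumption.
import hashlib
from datetime import date, timedelta

GRACE_PERIOD = timedelta(days=30)

def hardware_id(components):
    """Derive a stable identifier from a list of component strings."""
    return hashlib.sha256("|".join(sorted(components)).encode()).hexdigest()

def activation_required(install_date, activated, today):
    """True if the grace period has elapsed without activation."""
    return (not activated) and (today - install_date > GRACE_PERIOD)

def reactivation_required(stored_id, current_components):
    """True if the hardware identifier no longer matches the stored one."""
    return hardware_id(current_components) != stored_id

original = ["cpu:pentium3", "disk:wd200", "nic:3c905"]
stored = hardware_id(original)
print(activation_required(date(2001, 10, 25), False, date(2001, 12, 1)))  # True
print(reactivation_required(stored, original))                            # False
print(reactivation_required(stored, ["cpu:athlonxp", "disk:wd200",
                                     "nic:3c905"]))                       # True
```

In this simplification any component change voids the activation; the real system reportedly tolerates a limited number of changed components before requiring re-activation.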
Technology
Operating Systems
null
33894
https://en.wikipedia.org/wiki/Wheatstone%20bridge
Wheatstone bridge
A Wheatstone bridge is an electrical circuit used to measure an unknown electrical resistance by balancing two legs of a bridge circuit, one leg of which includes the unknown component. The primary benefit of the circuit is its ability to provide extremely accurate measurements (in contrast with something like a simple voltage divider). Its operation is similar to that of the original potentiometer. The Wheatstone bridge was invented by Samuel Hunter Christie (sometimes spelled "Christy") in 1833 and improved and popularized by Sir Charles Wheatstone in 1843. One of the Wheatstone bridge's initial uses was for soil analysis and comparison. Operation In the figure, R_x is the fixed, yet unknown, resistance to be measured. R_1, R_2 and R_3 are resistors of known resistance, and the resistance of R_2 is adjustable; R_1 and R_2 form one leg of the bridge, while R_3 and R_x form the other. The resistance R_2 is adjusted until the bridge is "balanced" and no current flows through the galvanometer V_G. At this point, the potential difference between the two midpoints (B and D) is zero. Therefore the ratio of the two resistances in the known leg (R_2 / R_1) is equal to the ratio of the two resistances in the unknown leg (R_x / R_3). If the bridge is unbalanced, the direction of the current indicates whether R_2 is too high or too low. At the point of balance,

R_x = (R_2 / R_1) · R_3

Detecting zero current with a galvanometer can be done to extremely high precision. Therefore, if R_1, R_2, and R_3 are known to high precision, then R_x can be measured to high precision. Very small changes in R_x disrupt the balance and are readily detected. Alternatively, if R_1, R_2, and R_3 are known, but R_2 is not adjustable, the voltage difference across or current flow through the meter can be used to calculate the value of R_x, using Kirchhoff's circuit laws. This setup is frequently used in strain gauge and resistance thermometer measurements, as it is usually faster to read a voltage level off a meter than to adjust a resistance to zero the voltage. Derivation Quick derivation at balance At the point of balance, both the voltage and the current between the two midpoints (B and D) are zero. Therefore, I_1 = I_2, I_3 = I_x, and V_D = V_B. Because V_D = V_B, the voltage drops across corresponding arms are equal:

I_3 · R_3 = I_1 · R_1 and I_x · R_x = I_2 · R_2

Dividing the second equation by the first, member by member, and using the current equalities above, gives

R_x / R_3 = R_2 / R_1, i.e. R_x = (R_2 / R_1) · R_3

Full derivation using Kirchhoff's circuit laws First, Kirchhoff's first law is used to find the currents at junctions B and D:

I_3 − I_x + I_G = 0
I_1 − I_2 − I_G = 0

Then, Kirchhoff's second law is used for finding the voltage in the loops ABDA and BCDB:

(I_3 · R_3) − (I_G · R_G) − (I_1 · R_1) = 0
(I_x · R_x) − (I_2 · R_2) + (I_G · R_G) = 0

When the bridge is balanced, I_G = 0, so the second set of equations can be rewritten as:

I_3 · R_3 = I_1 · R_1   (1)
I_x · R_x = I_2 · R_2   (2)

Then, equation (2) is divided by equation (1) and the resulting equation is rearranged, giving:

R_x = R_3 · (I_2 · I_3) / (I_1 · I_x) · (R_2 / R_1)

Since I_3 = I_x and I_1 = I_2 (from Kirchhoff's first law with I_G = 0), the current factors cancel out of the above equation. The desired value of R_x is now known to be given as:

R_x = (R_2 / R_1) · R_3

On the other hand, if the resistance of the galvanometer is high enough that I_G is negligible, it is possible to compute R_x from the three other resistor values and the supply voltage (V_S), or the supply voltage from all four resistor values. To do so, one works out the voltage from each potential divider and subtracts one from the other. The equation for this is:

V_G = (R_x / (R_3 + R_x) − R_2 / (R_1 + R_2)) · V_S

where V_G is the voltage of node D relative to node B (a numeric illustration of these relations appears at the end of this article). Significance The Wheatstone bridge illustrates the concept of a difference measurement, which can be extremely accurate. Variations on the Wheatstone bridge can be used to measure capacitance, inductance, impedance and other quantities, such as the amount of combustible gases in a sample, with an explosimeter. The Kelvin bridge was specially adapted from the Wheatstone bridge for measuring very low resistances.
In many cases, the significance of measuring the unknown resistance is that some physical phenomenon (such as force, temperature, or pressure) changes that resistance, thereby allowing the Wheatstone bridge to be used to measure those quantities indirectly. The concept was extended to alternating current measurements by James Clerk Maxwell in 1865 and further improved by Alan Blumlein in British Patent no. 323,037, 1928. Modifications of the basic bridge The Wheatstone bridge is the fundamental bridge, but there are other modifications that can be made to measure various kinds of resistances when the fundamental Wheatstone bridge is not suitable. Some of the modifications are:
- Carey Foster bridge, for measuring small resistances
- Kelvin bridge, for measuring small four-terminal resistances
- Maxwell bridge and Wien bridge, for measuring reactive components
- Anderson's bridge, for measuring the self-inductance of a circuit, an advanced form of Maxwell's bridge
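As a numeric check of the relations derived in the Operation and Derivation sections above, the following sketch computes the unknown resistance at balance and the bridge output voltage for an ideal, high-impedance meter. The resistor and supply values are hypothetical examples, not taken from the article.

```python
# Numeric sketch of the Wheatstone bridge relations derived above.
# R1 and R3 are fixed known resistors, R2 is the adjustable known resistor,
# and Rx is the unknown; all values below are hypothetical examples.

def rx_at_balance(r1, r2, r3):
    """Unknown resistance when the bridge is balanced: Rx = (R2 / R1) * R3."""
    return (r2 / r1) * r3

def bridge_voltage(vs, r1, r2, r3, rx):
    """Voltage of node D relative to node B when the galvanometer current is
    negligible: the difference of the two potential dividers, times V_S."""
    return (rx / (r3 + rx) - r2 / (r1 + r2)) * vs

# With R1 = 100 ohm, R3 = 1000 ohm, and balance reached at R2 = 220 ohm,
# the unknown resistance is (220 / 100) * 1000 = 2200 ohm.
print(rx_at_balance(100.0, 220.0, 1000.0))                 # 2200.0
# At balance, the bridge output voltage is zero:
print(bridge_voltage(5.0, 100.0, 220.0, 1000.0, 2200.0))   # 0.0
# A small change in Rx unbalances the bridge and is readily detected:
print(bridge_voltage(5.0, 100.0, 220.0, 1000.0, 2210.0))   # ~0.005 V
```

The last line illustrates why the bridge is so sensitive: a 0.5% change in the unknown resistance produces a measurable output voltage even with a modest supply.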
Technology
Functional circuits
null
33898
https://en.wikipedia.org/wiki/Website
Website
A website (also written as a web site) is one or more web pages and related content that is identified by a common domain name and published on at least one web server. Websites are typically dedicated to a particular topic or purpose, such as news, education, commerce, entertainment, or social media. Hyperlinking between web pages guides the navigation of the site, which often starts with a home page. The most-visited sites are Google, YouTube, and Facebook. All publicly accessible websites collectively constitute the World Wide Web. There are also private websites that can only be accessed on a private network, such as a company's internal website for its employees. Users can access websites on a range of devices, including desktops, laptops, tablets, and smartphones. The app used on these devices is called a web browser. Background The World Wide Web (WWW) was created in 1989 by the British CERN computer scientist Tim Berners-Lee. On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone, contributing to the immense growth of the Web. Before the introduction of the Hypertext Transfer Protocol (HTTP), other protocols such as the File Transfer Protocol and the Gopher protocol were used to retrieve individual files from a server. These protocols offer a simple directory structure in which the user navigates and chooses files to download. Documents were most often presented as plain text files without formatting or were encoded in word processor formats. History While "web site" was the original spelling (sometimes capitalized "Web site", since "Web" is a proper noun when referring to the World Wide Web), this variant has become rarely used, and "website" has become the standard spelling. All major style guides, such as The Chicago Manual of Style and the AP Stylebook, have reflected this change. In February 2009, Netcraft, an Internet monitoring company that has tracked Web growth since 1995, reported that there were 215,675,903 websites with domain names and content on them in 2009, compared to just 19,732 websites in August 1995. After reaching 1 billion websites in September 2014 (a milestone confirmed by Netcraft in its October 2014 Web Server Survey and first announced by Internet Live Stats, as attested by a tweet from Tim Berners-Lee, the inventor of the World Wide Web), the number of websites in the world subsequently declined, reverting to a level below 1 billion. This was due to monthly fluctuations in the count of inactive websites. The number of websites grew past 1 billion again by March 2016 and has continued growing since. The Netcraft Web Server Survey of January 2020 reported that there were 1,295,973,827 websites, and in April 2021 it reported 1,212,139,815 sites across 10,939,637 web-facing computers and 264,469,666 unique domains. An estimated 85 percent of all websites are inactive. Static website A static website is one that has Web pages stored on the server in the format that is sent to a client Web browser. It is primarily coded in Hypertext Markup Language (HTML); Cascading Style Sheets (CSS) are used to control appearance beyond basic HTML. Images are commonly used to create the desired appearance and as part of the main content. Audio or video might also be considered "static" content if it plays automatically or is generally non-interactive. This type of website usually displays the same information to all visitors.
Similar to handing out a printed brochure to customers or clients, a static website will generally provide consistent, standard information for an extended period of time. Although the website owner may make updates periodically, editing the text, photos, and other content is a manual process and may require basic website design skills and software. Simple forms or marketing examples of websites, such as a classic website, a five-page website, or a brochure website, are often static websites, because they present pre-defined, static information to the user. This may include information about a company and its products and services through text, photos, animations, audio/video, and navigation menus. Static websites may still use server side includes (SSI) as an editing convenience, such as sharing a common menu bar across many pages. As the site's behavior to the reader is still static, this is not considered a dynamic site. Dynamic website A dynamic website is one that changes or customizes itself frequently and automatically. Server-side dynamic pages are generated "on the fly" by computer code that produces the HTML (the CSS, which control appearance, remain static files). A wide range of software systems, such as CGI, Java Servlets and JavaServer Pages (JSP), Active Server Pages, and ColdFusion (CFML), are available to generate dynamic Web systems and dynamic sites. Various Web application frameworks and Web template systems are available for general-use programming languages like Perl, PHP, Python, and Ruby to make it faster and easier to create complex dynamic websites (a minimal sketch appears at the end of this article). A site can display the current state of a dialogue between users, monitor a changing situation, or provide information in some way personalized to the requirements of the individual user. For example, when the front page of a news site is requested, the code running on the web server might combine stored HTML fragments with news stories retrieved from a database or another website via RSS to produce a page that includes the latest information. Dynamic sites can be interactive by using HTML forms, storing and reading back browser cookies, or by creating a series of pages that reflect the previous history of clicks. Another example of dynamic content is when a retail website with a database of media products allows a user to input a search request, e.g. for the keyword Beatles. In response, the content of the Web page will change from how it looked before, displaying a list of Beatles products like CDs, DVDs, and books. Dynamic HTML uses JavaScript code to instruct the Web browser how to interactively modify the page contents. One way to simulate a certain type of dynamic website while avoiding the performance loss of initiating the dynamic engine on a per-user or per-connection basis is to periodically and automatically regenerate a large series of static pages. Multimedia and interactive content Early websites had only text, and soon after, images. Web browser plug-ins were then used to add audio, video, and interactivity (such as for a rich Web application that mirrors the complexity of a desktop application like a word processor). Examples of such plug-ins are Microsoft Silverlight, Adobe Flash Player, Adobe Shockwave Player, and Java SE. HTML5 includes provisions for audio and video without plugins.
JavaScript is also built into most modern web browsers, and allows website creators to send code to the web browser that instructs it how to interactively modify page content and communicate with the web server if needed. The browser's internal representation of the content is known as the Document Object Model (DOM). WebGL (Web Graphics Library) is a modern JavaScript API for rendering interactive 3D graphics without the use of plug-ins. It allows interactive content such as 3D animations, visualizations, and video explainers to be presented to users in an intuitive way. A 2010-era trend in websites called "responsive design" aims to provide the best viewing experience by adapting a site's layout to the user's device or mobile platform, giving a rich user experience. Types Websites can be divided into two broad categories: static and interactive. Interactive sites are part of the Web 2.0 community of sites and allow for interactivity between the site owner and site visitors or users. Static sites serve or capture information but do not allow engagement with the audience or users directly. Some websites are informational or produced by enthusiasts or for personal use or entertainment. Many websites do aim to make money, using one or more business models, including: posting interesting content and selling contextual advertising, either through direct sales or through an advertising network; e-commerce, in which products or services are purchased directly through the website; advertising products or services available at a brick-and-mortar business; and freemium, in which basic content is available for free but premium content requires a payment (e.g., WordPress, an open-source platform for building blogs and websites). Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, Web-based email, social networking websites, websites providing real-time stock market data, as well as sites providing various other services.
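As a concrete illustration of the server-side generation described in the Dynamic website section, the sketch below serves a page whose HTML is produced on the fly for each request, using only Python's standard library. The page content (the current server time) and the port number are arbitrary choices for the example.

```python
# Minimal dynamic-page sketch: the HTML is regenerated for every request,
# here by embedding the current server time into the page.
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

class DynamicPage(BaseHTTPRequestHandler):
    def do_GET(self):
        html = ("<!DOCTYPE html><html><body>"
                "<h1>Dynamic page</h1>"
                f"<p>Generated at {datetime.now():%Y-%m-%d %H:%M:%S}.</p>"
                "</body></html>")
        body = html.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Each visit to http://localhost:8000/ returns freshly generated HTML.
    HTTPServer(("localhost", 8000), DynamicPage).serve_forever()
```

A static site, by contrast, would serve the same stored HTML file to every visitor; frameworks such as those named above layer templating, routing, and database access on top of this same request-response cycle.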
Technology
Internet
null
33931
https://en.wikipedia.org/wiki/Weight
Weight
In science and engineering, the weight of an object is a quantity associated with the gravitational force exerted on the object by other objects in its environment, although there is some variation and debate as to the exact definition. Some standard textbooks define weight as a vector quantity, the gravitational force acting on the object. Others define weight as a scalar quantity, the magnitude of the gravitational force. Yet others define it as the magnitude of the reaction force exerted on a body by mechanisms that counteract the effects of gravity: the weight is the quantity that is measured by, for example, a spring scale. Thus, in a state of free fall, the weight would be zero. In this sense of weight, terrestrial objects can be weightless: if one ignores air resistance, one could say the legendary apple falling from the tree, on its way to meet the ground near Isaac Newton, was weightless. The unit of measurement for weight is that of force, which in the International System of Units (SI) is the newton. For example, an object with a mass of one kilogram has a weight of about 9.8 newtons on the surface of the Earth, and about one-sixth as much on the Moon. Although weight and mass are scientifically distinct quantities, the terms are often confused with each other in everyday use (e.g. comparing and converting force weight in pounds to mass in kilograms and vice versa). Further complications in elucidating the various concepts of weight have to do with the theory of relativity, according to which gravity is modeled as a consequence of the curvature of spacetime. In the teaching community, a considerable debate has existed for over half a century on how to define weight for students. The current situation is that multiple concepts co-exist and find use in their various contexts. History Discussion of the concepts of heaviness (weight) and lightness (levity) dates back to the ancient Greek philosophers. These were typically viewed as inherent properties of objects. Plato described weight as the natural tendency of objects to seek their kin. To Aristotle, weight and levity represented the tendency to restore the natural order of the basic elements: air, earth, fire, and water. He ascribed absolute weight to earth and absolute levity to fire. Archimedes saw weight as a quality opposed to buoyancy, with the conflict between the two determining whether an object sinks or floats. The first operational definition of weight was given by Euclid, who defined weight as: "the heaviness or lightness of one thing, compared to another, as measured by a balance." Operational balances (rather than definitions) had, however, been around much longer. According to Aristotle, weight was the direct cause of the falling motion of an object; the speed of the falling object was supposed to be directly proportional to the weight of the object. As medieval scholars discovered that in practice the speed of a falling object increased with time, this prompted a change to the concept of weight to maintain this cause-effect relationship. Weight was split into a "still weight" or pondus, which remained constant, and the actual gravity or gravitas, which changed as the object fell. The concept of gravitas was eventually replaced by Jean Buridan's impetus, a precursor to momentum. The rise of the Copernican view of the world led to the resurgence of the Platonic idea that like objects attract, but in the context of heavenly bodies. In the 17th century, Galileo made significant advances in the concept of weight.
He proposed a way to measure the difference between the weight of a moving object and an object at rest. Ultimately, he concluded weight was proportional to the amount of matter of an object, not the speed of motion as supposed by the Aristotelean view of physics. Newton The introduction of Newton's laws of motion and the development of Newton's law of universal gravitation led to considerable further development of the concept of weight. Weight became fundamentally separate from mass. Mass was identified as a fundamental property of objects connected to their inertia, while weight became identified with the force of gravity on an object and therefore dependent on the context of the object. In particular, Newton considered weight to be relative to another object causing the gravitational pull, e.g. the weight of the Earth towards the Sun. Newton considered time and space to be absolute. This allowed him to consider concepts such as true position and true velocity. Newton also recognized that weight as measured by the action of weighing was affected by environmental factors such as buoyancy. He considered this a false weight induced by imperfect measurement conditions, for which he introduced the term apparent weight as compared to the true weight defined by gravity. Although Newtonian physics made a clear distinction between weight and mass, the term weight continued to be commonly used when people meant mass. This led the 3rd General Conference on Weights and Measures (CGPM) of 1901 to officially declare "The word weight denotes a quantity of the same nature as a force: the weight of a body is the product of its mass and the acceleration due to gravity", thus distinguishing it from mass for official usage. Relativity In the 20th century, the Newtonian concepts of absolute time and space were challenged by relativity. Einstein's equivalence principle put all observers, moving or accelerating, on the same footing. This led to an ambiguity as to what exactly is meant by the force of gravity and weight. A scale in an accelerating elevator cannot be distinguished from a scale in a gravitational field. Gravitational force and weight thereby became essentially frame-dependent quantities. This prompted the abandonment of the concept as superfluous in the fundamental sciences such as physics and chemistry. Nonetheless, the concept remained important in the teaching of physics. The ambiguities introduced by relativity led, starting in the 1960s, to considerable debate in the teaching community as to how to define weight for students, choosing between a nominal definition of weight as the force due to gravity and an operational definition defined by the act of weighing. Definitions Several definitions exist for weight, not all of which are equivalent. Gravitational definition The most common definition of weight found in introductory physics textbooks defines weight as the force exerted on a body by gravity. This is often expressed in the formula W = mg, where W is the weight, m the mass of the object, and g the gravitational acceleration. In 1901, the 3rd General Conference on Weights and Measures (CGPM) established this as its official definition of weight, in the resolution quoted above. This resolution defines weight as a vector, since force is a vector quantity. However, some textbooks also take weight to be a scalar, defined as the magnitude of the gravitational force, W = mg. The gravitational acceleration varies from place to place. Sometimes, it is simply taken to have a standard value of 9.80665 m/s², which gives the standard weight.
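Numerically, the gravitational definition amounts to a single multiplication. The short sketch below evaluates W = mg with the standard value of g and, for comparison, an approximate lunar value; the lunar figure is an approximation used for illustration, not part of the CGPM definition.

```python
# W = m * g: magnitude of the weight of a body of mass m (in kg), in newtons.
G_STANDARD = 9.80665   # standard gravity, m/s^2
G_MOON = 1.62          # approximate lunar surface gravity, m/s^2

def weight(mass_kg, g=G_STANDARD):
    return mass_kg * g

print(weight(1.0))          # 9.80665 N: standard weight of 1 kg on Earth
print(weight(1.0, G_MOON))  # ~1.62 N: about one-sixth, as on the Moon
```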
The force whose magnitude is equal to mg newtons is also known as the m kilogram weight (abbreviated kg-wt). Operational definition In the operational definition, the weight of an object is the force measured by the operation of weighing it, which is the force it exerts on its support. Since W is the downward gravitational force on the body and the body is not accelerating, the support exerts an equal and opposite force on the body; by Newton's third law, this is also equal in magnitude to the force the body exerts on its support, because action and reaction have the same numerical value and opposite directions. This can make a considerable difference, depending on the details; for example, an object in free fall exerts little if any force on its support, a situation that is commonly referred to as weightlessness. However, being in free fall does not affect the weight according to the gravitational definition. Therefore, the operational definition is sometimes refined by requiring that the object be at rest. However, this raises the issue of defining "at rest" (usually, being at rest with respect to the Earth is implied by using standard gravity). In the operational definition, the weight of an object at rest on the surface of the Earth is lessened by the effect of the centrifugal force from the Earth's rotation. The operational definition, as usually given, does not explicitly exclude the effects of buoyancy, which reduces the measured weight of an object when it is immersed in a fluid such as air or water. As a result, a floating balloon or an object floating in water might be said to have zero weight. ISO definition In the ISO International standard ISO 80000-4:2006, describing the basic physical quantities and units in mechanics as a part of the International standard ISO/IEC 80000, weight is defined as the product of the mass of the body and the local acceleration of free fall. The definition is dependent on the chosen frame of reference. When the chosen frame is co-moving with the object in question, then this definition precisely agrees with the operational definition. If the specified frame is the surface of the Earth, the weight according to the ISO and gravitational definitions differs only by the centrifugal effects due to the rotation of the Earth. Apparent weight In many real-world situations the act of weighing may produce a result that differs from the ideal value provided by the definition used. This is usually referred to as the apparent weight of the object. A common example is the effect of buoyancy: when an object is immersed in a fluid, the displacement of the fluid causes an upward force on the object, making it appear lighter when weighed on a scale. The apparent weight may be similarly affected by levitation and mechanical suspension. When the gravitational definition of weight is used, the operational weight measured by an accelerating scale is often also referred to as the apparent weight. Mass In modern scientific usage, weight and mass are fundamentally different quantities: mass is an intrinsic property of matter, whereas weight is a force that results from the action of gravity on matter: it measures how strongly the force of gravity pulls on that matter. However, in most practical everyday situations the word "weight" is used when, strictly, "mass" is meant. For example, most people would say that an object "weighs one kilogram", even though the kilogram is a unit of mass.
The distinction between mass and weight is unimportant for many practical purposes because the strength of gravity does not vary too much on the surface of the Earth. In a uniform gravitational field, the gravitational force exerted on an object (its weight) is directly proportional to its mass. For example, if object A weighs 10 times as much as object B, then the mass of object A is 10 times that of object B. This means that an object's mass can be measured indirectly by its weight, and so, for everyday purposes, weighing (using a weighing scale) is an entirely acceptable way of measuring mass. Similarly, a balance measures mass indirectly by comparing the weight of the measured item to that of an object of known mass. Since the measured item and the comparison mass are in virtually the same location, and so experience the same gravitational field, the effect of varying gravity does not affect the comparison or the resulting measurement. The Earth's gravitational field is not uniform but can vary by as much as 0.5% at different locations on Earth (see Earth's gravity). These variations alter the relationship between weight and mass, and must be taken into account in high-precision weight measurements that are intended to indirectly measure mass. To be legal for commerce, spring scales, which measure local weight, must be calibrated at the location at which the objects will be used so that they show this standard weight. This table shows the variation of acceleration due to gravity (and hence the variation of weight) at various locations on the Earth's surface. The historical use of "weight" for "mass" also persists in some scientific terminology – for example, the chemical terms "atomic weight", "molecular weight", and "formula weight" can still be found rather than the preferred "atomic mass", etc. In a different gravitational field, for example, on the surface of the Moon, an object can have a significantly different weight than on Earth. The gravity on the surface of the Moon is only about one-sixth as strong as on the surface of the Earth. A one-kilogram mass is still a one-kilogram mass (as mass is an intrinsic property of the object) but the downward force due to gravity, and therefore its weight, is only one-sixth of what the object would have on Earth. So a man of mass 180 pounds weighs only about 30 pounds-force when visiting the Moon.
SI units
In most modern scientific work, physical quantities are measured in SI units. The SI unit of weight is the same as that of force: the newton (N) – a derived unit which can also be expressed in SI base units as kg⋅m/s² (kilograms times metres per second squared). In commercial and everyday use, the term "weight" is usually used to mean mass, and the verb "to weigh" means "to determine the mass of" or "to have a mass of". Used in this sense, the proper SI unit is the kilogram (kg).
Pound and other non-SI units
In United States customary units, the pound can be either a unit of force or a unit of mass. Related units used in some distinct, separate subsystems of units include the poundal and the slug. The poundal is defined as the force necessary to accelerate an object of one-pound mass at 1 ft/s², and is equivalent to about 1/32.2 of a pound-force. The slug is defined as the amount of mass that accelerates at 1 ft/s² when one pound-force is exerted on it, and is equivalent to about 32.2 pounds (mass).
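As a worked illustration of these relationships, the following Python snippet reproduces the Moon figure above and the poundal and slug equivalences. It is a sketch with assumed rounded constants, not a definitive conversion routine.

# Illustrative sketch: mass is invariant, weight scales with local g.
G_EARTH = 9.80665   # standard gravity, m/s^2
G_MOON  = 1.62      # lunar surface gravity, m/s^2, roughly one-sixth of Earth's

mass_lb = 180.0                        # mass in pounds (avoirdupois)
weight_lbf_moon = mass_lb * G_MOON / G_EARTH
print(round(weight_lbf_moon))          # ~30 pounds-force, as in the text

# Poundal and slug, per the definitions above:
# 1 poundal accelerates a 1 lb mass at 1 ft/s^2 -> 1 lbf ~ 32.174 poundals
# 1 slug accelerates at 1 ft/s^2 under 1 lbf    -> 1 slug ~ 32.174 lb
G_FT = 32.174                          # standard gravity in ft/s^2
print(1 / G_FT)                        # pound-force per poundal, ~0.0311
print(G_FT)                            # pounds (mass) per slug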
The kilogram-force is a non-SI unit of force, defined as the force exerted by a one-kilogram mass in standard Earth gravity (equal to 9.80665 newtons exactly). The dyne is the cgs unit of force and is not a part of SI, while weights measured in the cgs unit of mass, the gram, remain a part of SI.
Sensation
The sensation of weight is caused by the force exerted by fluids in the vestibular system, a three-dimensional set of tubes in the inner ear. It is actually the sensation of g-force, regardless of whether this is due to being stationary in the presence of gravity, or, if the person is in motion, the result of any other forces acting on the body, such as acceleration or deceleration of a lift, or centrifugal forces when turning sharply.
Measuring
Weight is commonly measured using one of two methods. A spring scale or hydraulic or pneumatic scale measures local weight, the local force of gravity on the object (strictly, the apparent weight force). Since the local force of gravity can vary by up to 0.5% at different locations, spring scales will measure slightly different weights for the same object (the same mass) at different locations. To standardize weights, scales are always calibrated to read the weight an object would have at a nominal standard gravity of 9.80665 m/s² (approx. 32.174 ft/s²). However, this calibration is done at the factory. When the scale is moved to another location on Earth, the force of gravity will be different, causing a slight error. So to be highly accurate and legal for commerce, spring scales must be re-calibrated at the location at which they will be used (a numerical sketch follows below). A balance, on the other hand, compares the weight of an unknown object in one scale pan to the weight of standard masses in the other, using a lever mechanism – a lever-balance. The standard masses are often referred to, non-technically, as "weights". Since any variations in gravity will act equally on the unknown and the known weights, a lever-balance will indicate the same value at any location on Earth. Therefore, balance "weights" are usually calibrated and marked in mass units, so the lever-balance measures mass by comparing the Earth's attraction on the unknown object and standard masses in the scale pans. In the absence of a gravitational field, away from planetary bodies (e.g. in space), a lever-balance would not work, but on the Moon, for example, it would give the same reading as on Earth. Some balances are marked in weight units, but since the weights are calibrated at the factory for standard gravity, the balance will measure standard weight, i.e. what the object would weigh at standard gravity, not the actual local force of gravity on the object. If the actual force of gravity on the object is needed, this can be calculated by multiplying the mass measured by the balance by the acceleration due to gravity – either standard gravity (for everyday work) or the precise local gravity (for precision work). Tables of the gravitational acceleration at different locations can be found on the web. Gross weight is a term that is generally found in commerce or trade applications, and refers to the total weight of a product and its packaging. Conversely, net weight refers to the weight of the product alone, discounting the weight of its container or packaging; and tare weight is the weight of the packaging alone.
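The size of the re-calibration error described above is easy to estimate. In the Python sketch below, the 9.78 m/s² figure is an assumed near-equatorial local gravity, not a value from the source; a factory-calibrated spring scale misreads a 10 kg mass by roughly 0.3% until it is re-calibrated locally.

# Sketch of the spring-scale calibration issue (illustrative values only).
# A spring scale senses local force; to display mass or standard weight,
# it must be corrected for the local value of g.
G_STANDARD = 9.80665           # m/s^2, factory calibration assumption
g_local = 9.78                 # m/s^2, assumed near-equatorial value

true_mass = 10.0               # kg
force_on_spring = true_mass * g_local                 # what the spring feels, N
uncorrected_reading = force_on_spring / G_STANDARD    # kg shown by factory cal
corrected_reading = force_on_spring / g_local         # kg after local recal

print(uncorrected_reading)     # ~9.973 kg -> ~0.27% error
print(corrected_reading)       # 10.0 kg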
Relative weights on the Earth and other celestial bodies
The table below shows comparative gravitational accelerations at the surface of the Sun, the Earth's Moon, and each of the planets in the Solar System. The "surface" is taken to mean the cloud tops of the giant planets (Jupiter, Saturn, Uranus, and Neptune). For the Sun, the surface is taken to mean the photosphere. The values in the table have not been de-rated for the centrifugal effect of planet rotation (and cloud-top wind speeds for the giant planets) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles.
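Figures of this kind can be derived from Newton's law of universal gravitation, g = GM/r². The Python sketch below is illustrative: the masses and radii are approximate published values, and rotation is ignored, as the text notes for the tabulated values.

# Hedged sketch: deriving surface-gravity figures from g = G*M / r^2.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {            # mass (kg), mean radius (m); approximate values
    "Earth": (5.972e24, 6.371e6),
    "Moon":  (7.342e22, 1.737e6),
    "Mars":  (6.417e23, 3.390e6),
}

for name, (mass, radius) in bodies.items():
    g = G * mass / radius**2
    print(f"{name}: g = {g:.2f} m/s^2")
# Earth: ~9.82, Moon: ~1.62, Mars: ~3.72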
Physical sciences
Classical mechanics
null
33978
https://en.wikipedia.org/wiki/Weather
Weather
Weather is the state of the atmosphere, describing for example the degree to which it is hot or cold, wet or dry, calm or stormy, clear or cloudy. On Earth, most weather phenomena occur in the lowest layer of the planet's atmosphere, the troposphere, just below the stratosphere. Weather refers to day-to-day temperature, precipitation, and other atmospheric conditions, whereas climate is the term for the averaging of atmospheric conditions over longer periods of time. When used without qualification, "weather" is generally understood to mean the weather of Earth. Weather is driven by air pressure, temperature, and moisture differences between one place and another. These differences can occur due to the Sun's angle at any particular spot, which varies with latitude. The strong temperature contrast between polar and tropical air gives rise to the largest-scale atmospheric circulations: the Hadley cell, the Ferrel cell, the polar cell, and the jet stream. Weather systems in the middle latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow. Because Earth's axis is tilted relative to its orbital plane (called the ecliptic), sunlight is incident at different angles at different times of the year. On Earth's surface, temperatures usually range ±40 °C (−40 °F to 104 °F) annually. Over thousands of years, changes in Earth's orbit can affect the amount and distribution of solar energy received by Earth, thus influencing long-term climate and global climate change. Surface temperature differences in turn cause pressure differences. Higher altitudes are cooler than lower altitudes, as most atmospheric heating is due to contact with the Earth's surface while radiative losses to space are mostly constant. Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Earth's weather system is a chaotic system; as a result, small changes to one part of the system can grow to have large effects on the system as a whole. Human attempts to control the weather have occurred throughout history, and there is evidence that human activities such as agriculture and industry have modified weather patterns. Studying how the weather works on other planets has been helpful in understanding how weather works on Earth. A famous landmark in the Solar System, Jupiter's Great Red Spot, is an anticyclonic storm known to have existed for at least 300 years. However, the weather is not limited to planetary bodies. A star's corona is constantly being lost to space, creating what is essentially a very thin atmosphere throughout the Solar System. The movement of mass ejected from the Sun is known as the solar wind.
Causes
On Earth, common weather phenomena include wind, cloud, rain, snow, fog and dust storms. Less common events include natural disasters such as tornadoes, hurricanes, typhoons and ice storms. Almost all familiar weather phenomena occur in the troposphere (the lower part of the atmosphere). Weather does occur in the stratosphere and can affect weather lower down in the troposphere, but the exact mechanisms are poorly understood. Weather occurs primarily due to air pressure, temperature and moisture differences from one place to another. These differences can occur due to the sun angle at any particular spot, which varies with distance from the tropics.
In other words, the farther from the tropics one lies, the lower the sun angle is, which causes those locations to be cooler due to the spread of the sunlight over a greater surface. The strong temperature contrast between polar and tropical air gives rise to the large-scale atmospheric circulation cells and the jet stream. Weather systems in the mid-latitudes, such as extratropical cyclones, are caused by instabilities of the jet stream flow (see baroclinity). Weather systems in the tropics, such as monsoons or organized thunderstorm systems, are caused by different processes. Because the Earth's axis is tilted relative to its orbital plane, sunlight is incident at different angles at different times of the year. In June the Northern Hemisphere is tilted towards the Sun, so at any given Northern Hemisphere latitude sunlight falls more directly on that spot than in December (see Effect of sun angle on climate). This effect causes seasons. Over thousands to hundreds of thousands of years, changes in Earth's orbital parameters affect the amount and distribution of solar energy received by the Earth and influence long-term climate (see Milankovitch cycles). The uneven solar heating (the formation of zones of temperature and moisture gradients, or frontogenesis) can also be due to the weather itself in the form of cloudiness and precipitation. Higher altitudes are typically cooler than lower altitudes because the atmosphere is heated mainly from the surface below, and rising air cools as it expands, producing the adiabatic lapse rate. In some situations, the temperature actually increases with height. This phenomenon is known as an inversion and can cause mountaintops to be warmer than the valleys below. Inversions can lead to the formation of fog and often act as a cap that suppresses thunderstorm development. On local scales, temperature differences can occur because different surfaces (such as oceans, forests, ice sheets, or human-made objects) have differing physical characteristics such as reflectivity, roughness, or moisture content. Surface temperature differences in turn cause pressure differences. A hot surface warms the air above it, causing it to expand and lowering the density and the resulting surface air pressure. The resulting horizontal pressure gradient moves the air from higher to lower pressure regions, creating a wind, and the Earth's rotation then causes deflection of this airflow due to the Coriolis effect. The simple systems thus formed can then display emergent behaviour to produce more complex systems and thus other weather phenomena. Large-scale examples include the Hadley cell, while a smaller-scale example would be coastal breezes. The atmosphere is a chaotic system. As a result, small changes to one part of the system can accumulate and magnify to cause large effects on the system as a whole. This atmospheric instability makes weather forecasting less predictable than tides or eclipses. Although it is difficult to accurately predict weather more than a few days in advance, weather forecasters are continually working to extend this limit through meteorological research and refining current methodologies in weather prediction. However, it is theoretically impossible to make useful day-to-day predictions more than about two weeks ahead, imposing an upper limit on the potential for improved prediction skill.
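This sensitivity to initial conditions can be illustrated with a toy model. The Python sketch below uses the Lorenz-63 system, a classic idealized convection model rather than a real forecast model; the step size, perturbation, and variable names are illustrative assumptions. Two runs starting a billionth apart diverge until the "forecasts" are useless.

# Toy illustration of sensitive dependence on initial conditions,
# using the Lorenz-63 system with simple Euler integration.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # perturbed initial condition
for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 600 == 0:
        error = abs(a[0] - b[0])
        print(f"t = {step * 0.01:5.1f}  |x error| = {error:.3e}")
# The error grows by many orders of magnitude, mirroring why day-to-day
# forecasts lose skill beyond roughly two weeks.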
Shaping the planet Earth
Weather is one of the fundamental processes that shape the Earth. The process of weathering breaks down the rocks and soils into smaller fragments and then into their constituent substances. During precipitation, water droplets absorb and dissolve carbon dioxide from the surrounding air. This causes the rainwater to be slightly acidic, which aids the erosive properties of water. The released sediment and chemicals are then free to take part in chemical reactions that can affect the surface further (such as acid rain), while sodium and chloride ions (salt) are deposited in the seas and oceans. In time, geological forces may reform the sediment into other rocks and soils. In this way, weather plays a major role in erosion of the surface.
Effect on humans
Weather, seen from an anthropological perspective, is something all humans in the world constantly experience through their senses, at least while being outside. There are socially and scientifically constructed understandings of what weather is, what makes it change, the effect the weather, and especially inclement weather, has on humans in different situations, etc. Therefore, weather is something people often communicate about. In the United States, the National Weather Service publishes an annual report of fatalities, injuries, and total damage costs, which includes crop and property damage. It gathers this data via National Weather Service offices located throughout the 50 states as well as Puerto Rico, Guam, and the Virgin Islands. As of 2019, tornadoes had the greatest impact on humans, with 42 fatalities and over 3 billion dollars in crop and property damage.
Effects on populations
The weather has played a large and sometimes direct part in human history. Aside from climatic changes that have caused the gradual drift of populations (for example the desertification of the Middle East, and the formation of land bridges during glacial periods), extreme weather events have caused smaller scale population movements and intruded directly in historical events. One such event is the saving of Japan from invasion by the Mongol fleet of Kublai Khan by the Kamikaze winds in 1281. French claims to Florida came to an end in 1565 when a hurricane destroyed the French fleet, allowing Spain to conquer Fort Caroline. More recently, Hurricane Katrina redistributed over one million people from the central Gulf coast elsewhere across the United States, becoming the largest diaspora in the history of the United States. The Little Ice Age caused crop failures and famines in Europe. During the period known as the Grindelwald Fluctuation (1560–1630), volcanic forcing events seem to have led to more extreme weather events. These included droughts, storms and unseasonal blizzards, as well as causing the Swiss Grindelwald Glacier to expand. The 1690s saw the worst famine in France since the Middle Ages. Finland suffered a severe famine in 1696–1697, during which about one-third of the Finnish population died.
Forecasting
Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Human beings have attempted to predict the weather informally for millennia, and formally since at least the nineteenth century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere and using scientific understanding of atmospheric processes to project how the atmosphere will evolve.
Once an all-human endeavor based mainly upon changes in barometric pressure, current weather conditions, and sky condition, weather forecasting now relies on forecast models to determine future conditions. On the other hand, human input is still required to pick the best possible forecast model to base the forecast upon, which involves many disciplines such as pattern recognition skills, teleconnections, knowledge of model performance, and knowledge of model biases. The chaotic nature of the atmosphere, the massive computational power required to solve the equations that describe the atmosphere, the error involved in measuring the initial conditions, and an incomplete understanding of atmospheric processes mean that forecasts become less accurate as the difference between the current time and the time for which the forecast is being made (the range of the forecast) increases. The use of ensembles and model consensus helps to narrow the error and pick the most likely outcome. There are a variety of end users of weather forecasts. Weather warnings are important forecasts because they are used to protect life and property. Forecasts based on temperature and precipitation are important to agriculture, and therefore to commodity traders within stock markets. Temperature forecasts are used by utility companies to estimate demand over coming days. In some areas, people use weather forecasts to determine what to wear on a given day. Since outdoor activities are severely curtailed by heavy rain, snow and wind chill, forecasts can be used to plan activities around these events and to plan ahead to survive through them. Tropical weather forecasting is different from that at higher latitudes. The sun shines more directly on the tropics than on higher latitudes (at least on average over a year), which makes the tropics warm (Stevens 2011). In addition, the vertical direction (up, as one stands on the Earth's surface) is perpendicular to the Earth's axis of rotation at the equator, while at the poles the axis of rotation and the vertical coincide; this causes the Earth's rotation to influence the atmospheric circulation more strongly at high latitudes than at low latitudes. Because of these two factors, clouds and rainstorms in the tropics can occur more spontaneously compared to those at higher latitudes, where they are more tightly controlled by larger-scale forces in the atmosphere. Because of these differences, clouds and rain are more difficult to forecast in the tropics than at higher latitudes. On the other hand, temperature is easily forecast in the tropics, because it does not change much.
Modification
The aspiration to control the weather is evident throughout human history: from ancient rituals intended to bring rain for crops to the U.S. Military Operation Popeye, an attempt to disrupt supply lines by lengthening the North Vietnamese monsoon. The most successful attempts at influencing weather involve cloud seeding; they include the fog- and low-stratus dispersion techniques employed by major airports, techniques used to increase winter precipitation over mountains, and techniques to suppress hail. A recent example of weather control was China's preparation for the 2008 Summer Olympic Games. China shot 1,104 rain dispersal rockets from 21 sites in the city of Beijing in an effort to keep rain away from the opening ceremony of the games on 8 August 2008.
Guo Hu, head of the Beijing Municipal Meteorological Bureau (BMB), confirmed the success of the operation, with 100 millimeters of rain falling in Baoding City of Hebei Province to the southwest, and Beijing's Fangshan District recording a rainfall of 25 millimeters. Whereas there is inconclusive evidence for these techniques' efficacy, there is extensive evidence that human activity such as agriculture and industry results in inadvertent weather modification:
Acid rain, caused by industrial emission of sulfur dioxide and nitrogen oxides into the atmosphere, adversely affects freshwater lakes, vegetation, and structures.
Anthropogenic pollutants reduce air quality and visibility.
Climate change caused by human activities that emit greenhouse gases into the air is expected to affect the frequency of extreme weather events such as drought, extreme temperatures, flooding, high winds, and severe storms.
Heat generated by large metropolitan areas has been shown to minutely affect nearby weather, even at distances as far as .
The effects of inadvertent weather modification may pose serious threats to many aspects of civilization, including ecosystems, natural resources, food and fiber production, economic development, and human health.
Microscale meteorology
Microscale meteorology is the study of short-lived atmospheric phenomena smaller than mesoscale, about 1 km or less. These two branches of meteorology are sometimes grouped together as "mesoscale and microscale meteorology" (MMM) and together study all phenomena smaller than synoptic scale; that is, they study features generally too small to be depicted on a weather map. These include small and generally fleeting cloud "puffs" and other small cloud features.
Extremes on Earth
On Earth, temperatures usually range ±40 °C (−40 °F to 104 °F) annually. The range of climates and latitudes across the planet can offer extremes of temperature outside this range. The coldest air temperature ever recorded on Earth is −89.2 °C (−128.6 °F), at Vostok Station, Antarctica, on 21 July 1983. The hottest air temperature ever recorded was at 'Aziziya, Libya, on 13 September 1922, but that reading is queried. The highest recorded average annual temperature was at Dallol, Ethiopia. The coldest recorded average annual temperature was at Vostok Station, Antarctica. The coldest average annual temperature in a permanently inhabited location is at Eureka, Nunavut, in Canada. The windiest place ever recorded is Commonwealth Bay (George V Coast), Antarctica. Here the gales reach . Furthermore, the greatest snowfall in a period of twelve months occurred at Mount Rainier, Washington, US. It was recorded as of snow.
Extraterrestrial weather
Studying how the weather works on other planets has been seen as helpful in understanding how it works on Earth. Weather on other planets follows many of the same physical principles as weather on Earth, but occurs on different scales and in atmospheres having different chemical composition. The Cassini–Huygens mission to Titan discovered clouds formed from methane or ethane that deposit rain composed of liquid methane and other organic compounds. Earth's atmosphere includes six latitudinal circulation zones, three in each hemisphere. In contrast, Jupiter's banded appearance shows many such zones, Titan has a single jet stream near the 50th parallel north latitude, and Venus has a single jet near the equator.
One of the most famous landmarks in the Solar System, Jupiter's Great Red Spot, is an anticyclonic storm known to have existed for at least 300 years. On other giant planets, the lack of a surface allows the wind to reach enormous speeds: gusts of up to 600 metres per second (about 2,160 km/h) have been measured on the planet Neptune. This has created a puzzle for planetary scientists. The weather is ultimately created by solar energy, and the amount of energy received by Neptune is only about 1/900 of that received by Earth, yet the intensity of weather phenomena on Neptune is far greater than on Earth. The strongest planetary winds discovered so far are on the extrasolar planet HD 189733 b, which is thought to have easterly winds moving at more than .
Space weather
Weather is not limited to planetary bodies. Like that of all stars, the Sun's corona is constantly being lost to space, creating what is essentially a very thin atmosphere throughout the Solar System. The movement of mass ejected from the Sun is known as the solar wind. Inconsistencies in this wind and larger events on the surface of the star, such as coronal mass ejections, form a system that has features analogous to conventional weather systems (such as pressure and wind) and is generally known as space weather. Coronal mass ejections have been tracked as far out in the Solar System as Saturn. The activity of this system can affect planetary atmospheres and occasionally surfaces. The interaction of the solar wind with the terrestrial atmosphere can produce spectacular aurorae, and can play havoc with electrically sensitive systems such as electricity grids and radio signals.
Physical sciences
Earth science
null
33986
https://en.wikipedia.org/wiki/War%20hammer
War hammer
A war hammer (French: martel-de-fer, "iron hammer") is a weapon that was used by both foot soldiers and cavalry. It is a very old weapon and gave its name, owing to its constant use, to Judah Maccabee, a 2nd-century BC Jewish rebel, and to Charles Martel, one of the rulers of France. In the 15th and 16th centuries, the war hammer became an elaborately decorated and handsome weapon. The war hammer was a popular weapon in the late medieval period. It became somewhat of a necessity in combat when armor became so strong that swords and axes could no longer pierce it and would glance off on impact. The war hammer could inflict significant damage on the enemy through its heavy impact, without the need to pierce the armor.
Design
A war hammer consists of a handle and a head. The length of the handle may vary, the longest being roughly equivalent to that of a halberd (five to six feet or 1.5 to 1.8 meters), and the shortest about the same as that of a mace (two to three feet or 60 to 90 centimeters). Long war hammers were polearms meant for use on foot, whereas short ones were used from horseback. War hammers, especially when mounted on a pole, could in some cases transmit their impact through helmets and cause concussions. Later war hammers often had a spike on one side of the head, making them more versatile weapons. The spike end could be used for grappling the target's armor, reins, or shield. Against mounted opponents, the weapon could be directed at the legs of a horse, toppling the armored foe to the ground where they could be more easily attacked. The blunt side of a war hammer was usually used first to knock down and stun an enemy; once the opponent was immobilized on the ground, the hammer could be rotated to strike with the pointed side, which could punch a hole through the helmet and deliver the coup de grâce. A powerful swing from a war hammer, especially one with a spike, can concentrate a pressure of several hundred kilograms-force per square millimetre at the point of impact, a penetrating pressure comparable to that of a rifle bullet.
Maul
A maul is a long-handled hammer with a heavy head, of wood, lead, iron, or steel. It is similar in appearance and function to a modern sledgehammer, and it is sometimes shown as having a spear-like spike on the fore-end of the shaft. The use of the maul as a weapon seems to date from the later 14th century. During the Harelle of 1382, rebellious citizens of Paris seized 3,000 mauls from the city armory, leading to the rebels' being dubbed Maillotins. Later in the same year, Froissart records French men-at-arms using mauls at the Battle of Roosebeke, demonstrating that they were not simply weapons of the lower classes. A particular use of the maul was by archers in the 15th and 16th centuries. At the Battle of Agincourt, English longbowmen are recorded as using lead mauls, initially as a tool to drive in stakes but later as improvised weapons. Other references during the century (for example, in Charles the Bold's 1472 Ordinance) suggest continued use. They are recorded as a weapon of Tudor archers as late as 1562.
History
Full-fledged warhammers emerged in the mid-14th century as a direct response to the growing prevalence and effectiveness of plate armor on European battlefields. By 1395, French infantry deployed sophisticated warhammers equipped with thrusting tips, side flanges, and a basic beak, known as "Picoise."
These initially single-handed warhammers would later evolve into longer two-handed pole hammers, becoming not only widespread on European battlefields but also prominent in duels, particularly those involving armored combatants in tournaments or judicial settings. In the context of duels, the pole hammer was often categorized as a subtype of the pole-axe, commonly referred to as "axes" in period fencing manuals (German: (Mord)Axt, Italian: (Azza)). Pole hammers designed for duels frequently featured a rondel-shaped guard to protect the forward hand and a spike at the rear for increased versatility. Some variants also incorporated additional hooks at the rear to facilitate binding enemy weapons and limbs. While earlier pole hammers had flat surfaces, by the 15th century there was a trend towards dividing the hammerhead into three or four diamond-shaped tips, to keep the head of the weapon from sliding off armor plate and to focus the impact onto a smaller area. According to Austrian army officer and weapons expert Wendelin Boeheim, these modifications were primarily driven by aesthetic considerations rather than functional improvements. In some instances, the hammer surface featured the monogram of its owner, enabling the identification of victims on the battlefield. Starting from the 15th century, shorter warhammers found use as cavalry weapons. Initially rejected by the nobility due to their commoner origins, these hammers were eventually adopted out of practicality. This perception would later shift, with cavalry commanders, known as Rottmeister (lit. packmaster), carrying warhammers both as weapons and symbols of rank. These hammers became known as "Rottmeister hammers" or "packmaster hammers." In landsknecht armies, a similar connotation existed, and hammers evolved into status symbols among the lower nobility in eastern parts of central Europe (Poland-Lithuania/Hungary), second only to sabers in prestige. According to Polish aristocrat Andrzej Kitowicz, a nobleman would not leave his house without his saber and his warhammer, which could also serve as a walking stick. Late Central European hammers can be categorized into three subtypes: the Czekan, a warhammer with a flattened square or hexagonal surface and a bearded axe blade on the back; the Nadziak, a classic warhammer with a flattened square or hexagonal surface and a beak at the back; and the Ogbur, similar to the Nadziak but with an S-shaped or strongly curved beak on the back. All these types shared the characteristic of having staffs made from hard wood, often extending over the socket, and lacking a thrusting tip. While the warhammer fell out of favor in most parts of Europe, it remained popular in Poland and Hungary until the first half of the 18th century.
Technology
Melee weapons
null
34033
https://en.wikipedia.org/wiki/Wildebeest
Wildebeest
Wildebeest, also called gnu, are antelopes of the genus Connochaetes and native to Eastern and Southern Africa. They belong to the family Bovidae, which includes true antelopes, cattle, goats, sheep, and other even-toed horned ungulates. There are two species of wildebeest: the black wildebeest or white-tailed gnu (C. gnou), and the blue wildebeest or brindled gnu (C. taurinus). Fossil records suggest these two species diverged about one million years ago, resulting in a northern and a southern species. The blue wildebeest remained in its original range and changed very little from the ancestral species, while the black wildebeest changed more in adaptation to its open grassland habitat in the south. The most obvious ways of telling the two species apart are the differences in their colouring and in the way their horns are oriented. In East Africa, the blue wildebeest is the most abundant big-game species; some populations perform an annual migration to new grazing grounds, but the black wildebeest is merely nomadic. Breeding in both takes place over a short period of time at the end of the rainy season and the calves are soon active and are able to move with the herd, a fact necessary for their survival. Nevertheless, some fall prey to large carnivores, especially the spotted hyena. Wildebeest often graze in mixed herds with zebra, which gives heightened awareness of potential predators. They are also alert to the warning signals emitted by other animals such as baboons. Wildebeest are a tourist attraction but compete with domesticated livestock for pasture and are sometimes blamed by farmers for transferring diseases and parasites to their cattle. Illegal hunting does take place but the population trend is fairly stable. Wildebeest can also be found in national parks or on private land. The International Union for Conservation of Nature lists both kinds of wildebeest as least-concern species.
Etymology
Wildebeest is Dutch for 'wild beast' ('wild ox' or 'wild cattle' in Afrikaans, from bees, 'cattle'). The name was given by Dutch settlers who saw them on their way to the interior of South Africa in about 1700 because of their resemblance to wild oxen. The blue wildebeest was first known to westerners in the northern part of South Africa a century later, in the 1800s. Some sources claim the name gnu originates from the Khoekhoe name for these animals, t'gnu. Others contend the name and its pronunciation in English go back to the word !nu: used for the black wildebeest by the San people.
Classification
Taxonomy and evolution
The genus name Connochaetes derives from the Ancient Greek words κόννος, 'beard', and χαίτη, 'flowing hair, mane'. It is placed under the family Bovidae and subfamily Alcelaphinae, where its closest relatives are the hartebeest (Alcelaphus spp.), the hirola (Beatragus hunteri), and species in the genus Damaliscus, such as the topi, the tsessebe, the blesbok and the bontebok. The name Connochaetes was given by German zoologist Hinrich Lichtenstein in 1812. In the early 20th century, one species of the wildebeest, C. albojubatus, was identified in eastern Africa. In 1914, two separate races of the wildebeest were introduced, namely Gorgon a. albojubatus ("Athi white-bearded wildebeest") and G. a. mearnsi ("Loita white-bearded wildebeest"). However, in 1939, the two were once again merged into a single race, Connochaetes taurinus albojubatus. In the mid-20th century, two separate forms were recognised, Gorgon taurinus hecki and G. t. albojubatus.
Finally, two distinct types of wildebeest – the blue and black wildebeest – were identified. The blue wildebeest was at first placed under a separate genus, Gorgon, while the black wildebeest belonged to the genus Connochaetes. Today, they are united in the single genus Connochaetes, with the black wildebeest being named C. gnou and the blue wildebeest C. taurinus. According to a mitochondrial DNA analysis, the black wildebeest is estimated to have diverged from the main lineage during the Middle Pleistocene and to have become a distinct species around a million years ago. A divergence rate of around 2% has been calculated. The split does not seem to have been driven by competition for resources, but instead because each species adopted a different ecological niche and occupied a different trophic level. Blue wildebeest fossils dating back some 2.5 million years are common and widespread. They have been found in the fossil-bearing caves at the Cradle of Humankind north of Johannesburg. Elsewhere in South Africa, they are plentiful at such sites as Elandsfontein, Cornelia, and Florisbad. The earliest fossils of the black wildebeest were found in sedimentary rock in Cornelia in the Free State and dated back about 800,000 years. Today, five subspecies of the blue wildebeest are recognised, while the black wildebeest has no named subspecies.
Genetics and hybrids
The diploid number of chromosomes in the wildebeest is 58. Chromosomes were studied in a male and a female wildebeest. In the female, all except a pair of very large submetacentric chromosomes were found to be acrocentric. Metaphases were studied in the male's chromosomes, and very large submetacentric chromosomes were found there, as well, similar to those in the female both in size and morphology. Other chromosomes were acrocentric. The X chromosome is a large acrocentric and the Y chromosome a minute one. The two species of the wildebeest are known to hybridise. Male black wildebeest have been reported to mate with female blue wildebeest and vice versa. The differences in social behaviour and habitats have historically prevented interspecific hybridisation between the species, but hybridisation may occur when they are both confined within the same area. The resulting offspring are usually fertile. A study of these hybrid animals at Spioenkop Dam Nature Reserve in South Africa revealed that many had disadvantageous abnormalities relating to their teeth, horns, and the wormian bones in the skull. Another study reported an increase in the size of the hybrid as compared to either of its parents. In some animals, the tympanic part of the temporal bone is highly deformed, and in others, the radius and ulna are fused.
Characteristics of the species
Both species of wildebeest are even-toed, horned, greyish-brown ungulates resembling cattle. Males are larger than females and both have heavy forequarters compared to their hindquarters. They have broad muzzles, Roman noses, and shaggy manes and tails. The most striking morphological differences between the black and blue wildebeest are the orientation and curvature of their horns and the colour of their coats. The blue wildebeest is the bigger of the two species. In males, blue wildebeest stand tall at the shoulder and weigh around , while the black wildebeest stands tall and weighs about . In females, blue wildebeest have a shoulder height of and weigh while black wildebeest females stand at the shoulder and weigh .
The horns of blue wildebeest protrude to the side, then curve downwards before curving up back towards the skull, while the horns of the black wildebeest curve forward then downward before curving upwards at the tips. Blue wildebeest tend to be a dark grey colour with stripes, but may have a bluish sheen. The black wildebeest has brown-coloured hair, with a mane that ranges in colour from cream to black, and a cream-coloured tail. The blue wildebeest lives in a wide variety of habitats, including woodlands and grasslands, while the black wildebeest tends to reside exclusively in open grassland areas. In some areas, the blue wildebeest migrates over long distances in the winter, whereas the black wildebeest does not. The milk of the black wildebeest contains a higher protein, lower fat, and lower lactose content than the milk of the blue wildebeest. Wildebeest can live more than 40 years, though their average lifespan is around 20 years. Distribution and habitat Wildebeest inhabit the plains and open woodlands of parts of Africa south of the Sahara. The black wildebeest is native to the southernmost parts of the continent. Its historical range included South Africa, Eswatini, and Lesotho, but in the latter two countries, it was hunted to extinction in the 19th century. It has now been reintroduced to them and also introduced to Namibia, where it has become well established. It inhabits open plains, grasslands, and Karoo shrublands in both steep mountainous regions and lower undulating hills at altitudes varying between . In the past, it inhabited the highveld temperate grasslands during the dry winter season and the arid Karoo region during the rains. However, as a result of widespread hunting, the black wildebeest no longer occupies its historical range or makes migrations, and is now largely limited to game farms and protected reserves. The blue wildebeest is native to eastern and southern Africa. Its range includes Kenya, Tanzania, Botswana, Zambia, Zimbabwe, Mozambique, South Africa, Eswatini and Angola. It is no longer found in Malawi but has been successfully reintroduced into Namibia. Blue wildebeest are mainly found in short grass plains bordering bush-covered acacia savannas, thriving in areas that are neither too wet nor too dry. They can be found in habitats that vary from overgrazed areas with dense bush to open woodland floodplains. In East Africa, the blue wildebeest is the most abundant big game species, both in population and biomass. It is a notable feature of the Serengeti National Park in Tanzania, the Maasai Mara National Reserve in Kenya and the Liuwa Plain National Park in Zambia. Migration Not all wildebeest are migratory. Black wildebeest herds are often nomadic or may have a regular home range of . Bulls may occupy territories, usually about apart, but this spacing varies according to the quality of the habitat. In favourable conditions, they may be as close as , or they may be as far apart as in poor habitat. Female herds have home ranges of about in size. Herds of nonterritorial bachelor males roam at will and do not seem to have any restrictions on where they wander. Blue wildebeest have both migratory and sedentary populations. In the Ngorongoro, most animals are sedentary and males maintain a network of territories throughout the year, though breeding is seasonal in nature. Females and young form groups of about 10 individuals or join in larger aggregations, and nonterritorial males form bachelor groups. 
In the Serengeti and Tarangire ecosystems, populations are mostly migratory, with herds consisting of both sexes frequently moving, but resident subpopulations also exist. During the rutting season, the males may form temporary territories for a few hours or a day or so, and attempt to gather together a few females with which to mate, but soon they have to move on, often moving ahead to set up another temporary territory. In the Maasai Mara game reserve, a non-migratory population of blue wildebeest had dwindled from about 119,000 animals in 1977 to about 22,000 in 1997. The reason for the decline is thought to be the increasing competition between cattle and wildebeest for a diminishing area of grazing land as a result of changes in agricultural practices, and possibly fluctuations in rainfall. Each year, some East African populations of blue wildebeest have a long-distance migration, seemingly timed to coincide with the annual pattern of rainfall and grass growth. The timing of their migrations in both the rainy and dry seasons can vary considerably (by months) from year to year. At the end of the wet season (May or June in East Africa), wildebeest migrate to dry-season areas in response to a lack of surface (drinking) water. When the rainy season begins again (months later), animals quickly move back to their wet-season ranges. Factors suspected to affect migration include food abundance, surface water availability, predators, and phosphorus content in grasses. Phosphorus is a crucial element for all life forms, particularly for lactating female bovids. As a result, during the rainy season, wildebeest select grazing areas that contain particularly high phosphorus levels. One study found that, in addition to phosphorus, wildebeest select ranges containing grass with relatively high nitrogen content. Aerial photography has revealed that a level of organisation occurs in the movement of the herd that cannot be apparent to each individual animal; for example, the migratory herd exhibits a wavy front, and this suggests that some degree of local decision-making is taking place. Numerous documentaries feature wildebeest crossing rivers, with many being eaten by crocodiles or drowning in the attempt. While having the appearance of a frenzy, recent research has shown that a herd of wildebeest possesses what is known as a "swarm intelligence", whereby the animals systematically explore and overcome the obstacle as one.
Interdigital glands
The wildebeest has interdigital (hoof) glands only in its forelegs. Analysis of chemical constituents from a free-ranging blue wildebeest from Zimbabwe (Cawston Block) showed this gland contains cyclohexanecarboxylic acid, phenol, 2-phenylethanol, and six short-chain carboxylic acids.
Ecology
Predators
Major predators that feed on wildebeest include the lion, hyena, African wild dog, cheetah, leopard, and Nile crocodile, which seem to favour the wildebeest over other prey. Wildebeest, however, are very strong, and can inflict considerable injury even to a lion. Wildebeest have a maximum running speed of around . The primary defensive tactic is herding, where the young animals are protected by the older, larger ones, while the herd runs as a group. Typically, the predators attempt to isolate a young or ill animal and attack without having to worry about the herd. Wildebeest have developed additional sophisticated cooperative behaviours, such as animals taking turns sleeping while others stand guard against a night attack by invading predators.
Wildebeest migrations are closely followed by vultures, as wildebeest carcasses are an important source of food for these scavengers. The vultures consume about 70% of the wildebeest carcasses available. Decreases in the number of migrating wildebeest have also had a negative effect on the vultures. In the Serengeti ecosystem, Tanzania, wildebeest may help facilitate the migration of other, smaller-bodied grazers, such as Thomson's gazelles (Eudorcas thomsonii), which eat the new-growth grasses stimulated by wildebeest foraging.
Interactions with nonpredators
Zebras and wildebeest group together in open savannah environments with high chances of predation. This grouping strategy reduces predation risk because larger groups decrease each individual's chance of being hunted, and predators are more easily seen in open areas. The seasonal presence of thousands of migratory wildebeest reduces local lion predation on giraffe calves, resulting in greater survival of giraffes. Wildebeest can also listen in on the alarm calls of other species, and by doing so, can reduce their risk of predation. One study showed that wildebeest, along with other ungulates, responded more strongly to baboon alarm calls than to baboon contest calls, though both types of calls had similar patterns, amplitudes, and durations. The alarm calls were a response of the baboons to lions, and the contest calls were recorded when a dispute between two males occurred. Wildebeest compete with domesticated livestock for pasture and are sometimes blamed by farmers for transferring diseases and parasites to their cattle.
Breeding and reproduction
Wildebeest do not form permanent pair bonds and during the mating season, or rut, the males establish temporary territories and try to attract females into them. These small territories are about , with up to 300 territories per . The males defend these territories from other males while competing for females that are coming into oestrus. The males use grunts and distinctive behaviour to entice females into their territories. Wildebeest usually breed at the end of the rainy season when the animals are well fed and at their peak of fitness. This usually occurs between May and July, and birthing usually takes place between January and March, at the start of the wet season. Wildebeest females breed seasonally and ovulate spontaneously. The estrous cycle is about 23 days and the gestation period lasts 250 to 260 days. The calves weigh about at birth and scramble to their feet within minutes, being able to move with the herd soon afterwards, a fact on which their survival relies. The main predator of the calves is the spotted hyena. The calving peak period lasts for 2–3 weeks, and in small subpopulations and isolated groups, mortality of calves may be as high as 50%. However, in larger aggregations, or small groups living near large herds, mortality rates may be under 20%. Groups of wildebeest females and young live in the small areas established by the male. When groups of wildebeest join, the female to male ratio is higher because the females choose to move to the areas held by a smaller number of males. This female-dominated sex ratio may be due to illegal hunting and human disturbance, with higher male mortality having been attributed to hunting.
Threats and conservation
The black wildebeest has been classified as a least-concern species by the International Union for Conservation of Nature in the IUCN Red List. The populations of this species are on the increase.
More than 18,000 individuals are now believed to remain, 7,000 of them in Namibia, outside the species' natural range, where it is farmed. Around 80% of these occur in private areas, while the other 20% are confined to protected areas. The species' introduction into Namibia has been a success, and numbers there have increased substantially, from 150 in 1982 to 7,000 in 1992. The blue wildebeest has also been rated as of least concern. The population trend is stable, and their numbers are estimated to be around 1,500,000 – mainly due to the increase of the populations in Serengeti National Park (Tanzania) to 1,300,000 as of 1998. However, the numbers of one of the subspecies, the eastern white-bearded wildebeest (C. t. albojubatus), have seen a steep decline. Population density ranges from 0.15/km² in Hwange and Etosha National Parks to 35/km² in the Ngorongoro Conservation Area and Serengeti National Park. Overland migration as a biological process requires large, connected landscapes, which are increasingly difficult to maintain, particularly over the long term, when human demands on the landscape compete. The most acute threat comes from migration barriers, such as fences and roads. In one of the more striking examples of the consequences of fence-building on terrestrial migrations, Botswanan authorities placed thousands of kilometres of fences across the Kalahari that prevented wildebeest from reaching watering holes and grazing grounds, resulting in the deaths of tens of thousands of individuals and reducing the wildebeest population to less than 10% of its previous size. Illegal hunting is a major conservation concern in many areas, along with natural threats posed by main predators (which include lions, leopards, African hunting dogs, cheetahs and hyenas). Where the black and blue wildebeest share a common range, the two can hybridise, and this is regarded as a potential threat to the black wildebeest.
Uses and interaction with humans
Wildebeest provide several useful animal products. The hide makes good-quality leather, while the flesh is coarse, dry and rather hard. Wildebeest are killed for food, especially to make biltong in Southern Africa. This dried game meat is a delicacy and an important food item in Africa. The meat of females is more tender than that of males, and is the most tender during the autumn season. Wildebeest are a regular target for illegal meat hunters because their numbers make them easy to find. Cooks preparing the wildebeest carcass usually cut it into 11 pieces. The estimated price for wildebeest meat was about US$0.47 per around 2008. The silky, flowing tail of the black wildebeest is used to make fly-whisks or chowries. Wildebeest benefit the ecosystem by increasing soil fertility with their excreta. They are economically important for human beings, as they are a major tourist attraction. They also provide important products, such as leather, to humans. Wildebeest, however, can also have a negative impact on humans. Wild individuals can be competitors of commercial livestock, and can transmit diseases and cause epidemics among animals, particularly domestic cattle. They can also spread ticks, lungworms, tapeworms, flies, and paramphistome flukes.
Cultural depictions
The wildebeest is depicted on the coat of arms of the Province of Natal and later KwaZulu-Natal in South Africa. Over the years, the South African authorities have issued several stamps displaying the animal and the South African Mint has struck a two-cent piece with a prancing black wildebeest.
Movies and television shows also feature wildebeests, including Khumba (Mama V), The Wild (Kazar and his minions), All Hail King Julien (Vigman Wildebeest), Phineas and Ferb (Newton the Gnu), The Great Space Coaster (newscaster Gary Gnu), and The Lion King (the wildebeest stampede that resulted in Mufasa's death). Michael Flanders wrote a humorous song called "The Gnu", which was very popular when he performed it, with Donald Swann, in a revue called At the Drop of a Hat, which opened in London on 31 December 1956. In the 1970s, British tea brand Typhoo ran a series of television advertisements featuring an animated anthropomorphic gnu character. The wildebeest is the mascot of the GNU Project and GNU operating system. In the Llama Llama picture-book series by Anna Dewdney, an anthropomorphised wildebeest named Nelly Gnu is the main character, Llama Llama's best friend, and is also featured in a title of her own, Nelly Gnu and Daddy Too. In Terry Pratchett's Discworld series, Discworld analogues of wildebeests, which inhabited Howonderland, were called "bewilderbeest".
Biology and health sciences
Artiodactyla
null
34043
https://en.wikipedia.org/wiki/Wormhole
Wormhole
A wormhole is a hypothetical structure which connects disparate points in spacetime. It may be visualized as a tunnel with two ends at separate points in spacetime (i.e., different locations, different points in time, or both). Wormholes are based on a special solution of the Einstein field equations. Wormholes are consistent with the general theory of relativity, but whether they actually exist is unknown. Many scientists postulate that wormholes are merely projections of a fourth spatial dimension, analogous to how a two-dimensional (2D) being could experience only part of a three-dimensional (3D) object. A well-known analogy of such constructs is provided by the Klein bottle, displaying a hole when rendered in three dimensions but not in four or higher dimensions. In 1995, Matt Visser suggested there may be many wormholes in the universe if cosmic strings with negative mass were generated in the early universe. Some physicists, such as Kip Thorne, have suggested how to make wormholes artificially.
Visualization technique
For a simplified notion of a wormhole, space can be visualized as a two-dimensional surface. In this case, a wormhole would appear as a hole in that surface, lead into a 3D tube (the inside surface of a cylinder), then re-emerge at another location on the 2D surface with a hole similar to the entrance. An actual wormhole would be analogous to this, but with the spatial dimensions raised by one. For example, instead of circular holes on a two-dimensional plane, the entry and exit points could be visualized as spherical holes in 3D space leading into a four-dimensional "tube" similar to a spherinder. Another way to imagine wormholes is to take a sheet of paper and draw two somewhat distant points on one side of the paper. The sheet of paper represents a plane in the spacetime continuum, and the two points represent a distance to be traveled, but theoretically, a wormhole could connect these two points by folding that plane (i.e., the paper) so the points are touching. In this way, it would be much easier to traverse the distance since the two points are now touching.
Terminology
In 1928, German mathematician, philosopher and theoretical physicist Hermann Weyl proposed a wormhole hypothesis of matter in connection with mass analysis of electromagnetic field energy; however, he did not use the term "wormhole" (he spoke of "one-dimensional tubes" instead). American theoretical physicist John Archibald Wheeler (inspired by Weyl's work) coined the term "wormhole" in a 1957 paper written with Charles W. Misner.
Modern definitions
Wormholes have been defined both geometrically and topologically. From a topological point of view, an intra-universe wormhole (a wormhole between two points in the same universe) is a compact region of spacetime whose boundary is topologically trivial, but whose interior is not simply connected. Formalizing this idea leads to definitions such as the one given in Matt Visser's Lorentzian Wormholes (1996). Geometrically, wormholes can be described as regions of spacetime that constrain the incremental deformation of closed surfaces.
Enrico Rodrigo's The Physics of Stargates, for example, offers an informal definition in this spirit. Development Schwarzschild wormholes The first type of wormhole solution discovered was the Schwarzschild wormhole, which would be present in the Schwarzschild metric describing an eternal black hole, but it was found that it would collapse too quickly for anything to cross from one end to the other. Wormholes that could be crossed in both directions, known as traversable wormholes, were thought to be possible only if exotic matter with negative energy density could be used to stabilize them. However, physicists later reported that microscopic traversable wormholes may be possible and not require any exotic matter, instead requiring only electrically charged fermionic matter with small enough mass that it cannot collapse into a charged black hole. While such wormholes, if possible, may be limited to transfers of information, humanly traversable wormholes may exist if reality can broadly be described by the Randall–Sundrum model 2, a brane-based theory consistent with string theory. Einstein–Rosen bridges Einstein–Rosen bridges (or ER bridges), named after Albert Einstein and Nathan Rosen, are connections between areas of space that can be modeled as vacuum solutions to the Einstein field equations, and that are now understood to be intrinsic parts of the maximally extended version of the Schwarzschild metric describing an eternal black hole with no charge and no rotation. Here, "maximally extended" refers to the idea that the spacetime should not have any "edges": it should be possible to continue this path arbitrarily far into the particle's future or past for any possible trajectory of a free-falling particle (following a geodesic in the spacetime). In order to satisfy this requirement, it turns out that in addition to the black hole interior region that particles enter when they fall through the event horizon from the outside, there must be a separate white hole interior region that allows us to extrapolate the trajectories of particles that an outside observer sees rising up away from the event horizon. And just as there are two separate interior regions of the maximally extended spacetime, there are also two separate exterior regions, sometimes called two different "universes", with the second universe allowing us to extrapolate some possible particle trajectories in the two interior regions. This means that the interior black hole region can contain a mix of particles that fell in from either universe (and thus an observer who fell in from one universe might be able to see the light that fell in from the other one), and likewise particles from the interior white hole region can escape into either universe. All four regions can be seen in a spacetime diagram that uses Kruskal–Szekeres coordinates. In this spacetime, it is possible to come up with coordinate systems such that if a hypersurface of constant time (a set of points that all have the same time coordinate, such that every point on the surface has a space-like separation, giving what is called a 'space-like surface') is picked and an "embedding diagram" drawn depicting the curvature of space at that time, the embedding diagram will look like a tube connecting the two exterior regions, known as an "Einstein–Rosen bridge".
The Schwarzschild metric describes an idealized black hole that exists eternally from the perspective of external observers; a more realistic black hole that forms at some particular time from a collapsing star would require a different metric. When the infalling stellar matter is added to a diagram of a black hole's geometry, it removes the part of the diagram corresponding to the white hole interior region, along with the part of the diagram corresponding to the other universe. The Einstein–Rosen bridge was discovered by Ludwig Flamm in 1916, a few months after Schwarzschild published his solution, and was rediscovered by Albert Einstein and his colleague Nathan Rosen, who published their result in 1935. However, in 1962, John Archibald Wheeler and Robert W. Fuller published a paper showing that this type of wormhole is unstable if it connects two parts of the same universe, and that it will pinch off too quickly for light (or any particle moving slower than light) that falls in from one exterior region to make it to the other exterior region. According to general relativity, the gravitational collapse of a sufficiently compact mass forms a singular Schwarzschild black hole. In the Einstein–Cartan–Sciama–Kibble theory of gravity, however, it forms a regular Einstein–Rosen bridge. This theory extends general relativity by removing a constraint of the symmetry of the affine connection and regarding its antisymmetric part, the torsion tensor, as a dynamic variable. Torsion naturally accounts for the quantum-mechanical, intrinsic angular momentum (spin) of matter. The minimal coupling between torsion and Dirac spinors generates a repulsive spin–spin interaction that is significant in fermionic matter at extremely high densities. Such an interaction prevents the formation of a gravitational singularity (e.g. a black hole). Instead, the collapsing matter reaches an enormous but finite density and rebounds, forming the other side of the bridge. Although Schwarzschild wormholes are not traversable in both directions, their existence inspired Kip Thorne to imagine traversable wormholes created by holding the "throat" of a Schwarzschild wormhole open with exotic matter (material that has negative mass/energy). Other non-traversable wormholes include Lorentzian wormholes (first proposed by John Archibald Wheeler in 1957), wormholes creating a spacetime foam in a general relativistic spacetime manifold depicted by a Lorentzian manifold, and Euclidean wormholes (named after Euclidean manifold, a structure of Riemannian manifold). Traversable wormholes The Casimir effect shows that quantum field theory allows the energy density in certain regions of space to be negative relative to the ordinary matter vacuum energy, and it has been shown theoretically that quantum field theory allows states where energy can be arbitrarily negative at a given point. Many physicists, such as Stephen Hawking, Kip Thorne, and others, argued that such effects might make it possible to stabilize a traversable wormhole. The only known natural process that is theoretically predicted to form a wormhole in the context of general relativity and quantum mechanics was put forth by Juan Maldacena and Leonard Susskind in their ER = EPR conjecture. The quantum foam hypothesis is sometimes used to suggest that tiny wormholes might appear and disappear spontaneously at the Planck scale, and stable versions of such wormholes have been suggested as dark matter candidates.
It has also been proposed that, if a tiny wormhole held open by a negative mass cosmic string had appeared around the time of the Big Bang, it could have been inflated to macroscopic size by cosmic inflation. Lorentzian traversable wormholes would allow travel in both directions from one part of the universe to another part of that same universe very quickly or would allow travel from one universe to another. The possibility of traversable wormholes in general relativity was first demonstrated in a 1973 paper by Homer Ellis and independently in a 1973 paper by K. A. Bronnikov. Ellis analyzed the topology and the geodesics of the Ellis drainhole, showing it to be geodesically complete, horizonless, singularity-free, and fully traversable in both directions. The drainhole is a solution manifold of Einstein's field equations for a vacuum spacetime, modified by inclusion of a scalar field minimally coupled to the Ricci tensor with antiorthodox polarity (negative instead of positive). (Ellis specifically rejected referring to the scalar field as 'exotic' because of the antiorthodox coupling, finding arguments for doing so unpersuasive.) The solution depends on two parameters: $m$, which fixes the strength of its gravitational field, and $n$, which determines the curvature of its spatial cross sections. When $m$ is set equal to 0, the drainhole's gravitational field vanishes. What is left is the Ellis wormhole, a nongravitating, purely geometric, traversable wormhole. Kip Thorne and his graduate student Mike Morris independently rediscovered the Ellis wormhole in 1988 and argued for its use as a tool for teaching general relativity. For this reason, the type of traversable wormhole they proposed, held open by a spherical shell of exotic matter, is also known as a Morris–Thorne wormhole. Later, other types of traversable wormholes were discovered as allowable solutions to the equations of general relativity, including a variety analyzed in a 1989 paper by Matt Visser, in which a path through the wormhole can be made where the traversing path does not pass through a region of exotic matter. However, in pure Gauss–Bonnet gravity (a modification to general relativity involving extra spatial dimensions which is sometimes studied in the context of brane cosmology) exotic matter is not needed in order for wormholes to exist—they can exist even with no matter. A type held open by negative mass cosmic strings was put forth by Visser in collaboration with Cramer et al., in which it was proposed that such wormholes could have been naturally created in the early universe. Wormholes connect two points in spacetime, which means that they would in principle allow travel in time, as well as in space. In 1988, Morris, Thorne and Yurtsever worked out how to convert a wormhole traversing space into one traversing time by accelerating one of its two mouths. However, according to general relativity, it would not be possible to use a wormhole to travel back to a time earlier than when the wormhole was first converted into a time "machine". Before that time the wormhole could not have been noticed or used. Raychaudhuri's theorem and exotic matter To see why exotic matter is required, consider an incoming light front traveling along geodesics, which then crosses the wormhole and re-expands on the other side. The expansion goes from negative to positive. As the wormhole neck is of finite size, we would not expect caustics to develop, at least within the vicinity of the neck.
According to the optical Raychaudhuri's theorem, this requires a violation of the averaged null energy condition. Quantum effects such as the Casimir effect cannot violate the averaged null energy condition in any neighborhood of space with zero curvature, but calculations in semiclassical gravity suggest that quantum effects may be able to violate this condition in curved spacetime. Although it was recently hoped that quantum effects could not violate an achronal version of the averaged null energy condition, violations have nevertheless been found, so it remains an open possibility that quantum effects might be used to support a wormhole. Modified general relativity In some hypotheses where general relativity is modified, it is possible to have a wormhole that does not collapse without having to resort to exotic matter. For example, this is possible with $R^2$ gravity, a form of $f(R)$ gravity. Faster-than-light travel The impossibility of faster-than-light relative speed applies only locally. Wormholes might allow effective superluminal (faster-than-light) travel by ensuring that the speed of light is not exceeded locally at any time. While traveling through a wormhole, subluminal (slower-than-light) speeds are used. If two points are connected by a wormhole whose length is shorter than the distance between them outside the wormhole, the time taken to traverse it could be less than the time it would take a light beam to make the journey if it took a path through the space outside the wormhole. However, a light beam traveling through the same wormhole would beat the traveler. Time travel If traversable wormholes exist, they might allow time travel. A proposed time-travel machine using a traversable wormhole might hypothetically work in the following way: One end of the wormhole is accelerated to some significant fraction of the speed of light, perhaps with some advanced propulsion system, and then brought back to the point of origin. Alternatively, one entrance of the wormhole could be moved to within the gravitational field of an object that has higher gravity than the other entrance, and then returned to a position near the other entrance. For both these methods, time dilation causes the end of the wormhole that has been moved to have aged less, or become "younger", than the stationary end as seen by an external observer; however, time connects differently through the wormhole than outside it, so that synchronized clocks at either end of the wormhole will always remain synchronized as seen by an observer passing through the wormhole, no matter how the two ends move around. This means that an observer entering the "younger" end would exit the "older" end at a time when it was the same age as the "younger" end, effectively going back in time as seen by an observer from the outside. One significant limitation of such a time machine is that it is only possible to go as far back in time as the initial creation of the machine; it is more a path through time than a device that itself moves through time, and it would not allow the technology itself to be moved backward in time. According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as "exotic matter".
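The role Raychaudhuri's theorem plays in this requirement can be made explicit with the standard null Raychaudhuri equation (a textbook-style sketch; the sign conventions below are assumptions of this note, not taken from the text). For a congruence of null geodesics with tangent vector $k^a$, affine parameter $\lambda$, expansion $\theta$, shear $\sigma_{ab}$, and twist $\omega_{ab}$,

$$\frac{d\theta}{d\lambda} = -\frac{1}{2}\theta^2 - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}k^a k^b.$$

A light front entering a wormhole throat is hypersurface-orthogonal, so $\omega_{ab} = 0$, and the first two terms on the right are manifestly non-positive. The expansion can therefore pass from negative (converging) to positive (re-expanding), as required at the throat, only if $R_{ab}k^a k^b < 0$ somewhere along the congruence. Through the Einstein field equations, $R_{ab}k^a k^b = 8\pi G\,T_{ab}k^a k^b$ for null $k^a$, so re-expansion forces $T_{ab}k^a k^b < 0$, and averaging along the geodesic gives precisely the violation of the averaged null energy condition referred to above.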
More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition, and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small. In 1993, Matt Visser argued that the two mouths of a wormhole with such an induced clock difference could not be brought together without inducing quantum field and gravitational effects that would either make the wormhole collapse or the two mouths repel each other, or otherwise prevent information from passing through the wormhole. Because of this, the two mouths could not be brought close enough for causality violation to take place. However, in a 1997 paper, Visser hypothesized that a complex "Roman ring" (named after Tom Roman) configuration of N wormholes arranged in a symmetric polygon could still act as a time machine, although he concludes that this is more likely a flaw in classical quantum gravity theory rather than proof that causality violation is possible. Interuniversal travel A possible resolution to the paradoxes resulting from wormhole-enabled time travel rests on the many-worlds interpretation of quantum mechanics. In 1991 David Deutsch showed that quantum theory is fully consistent (in the sense that the so-called density matrix can be made free of discontinuities) in spacetimes with closed timelike curves. However, later it was shown that such a model of closed timelike curves can have internal inconsistencies as it will lead to strange phenomena like distinguishing non-orthogonal quantum states and distinguishing proper and improper mixtures. Accordingly, the destructive positive feedback loop of virtual particles circulating through a wormhole time machine, a result indicated by semi-classical calculations, is averted. A particle returning from the future does not return to its universe of origination but to a parallel universe. This suggests that a wormhole time machine with an exceedingly short time jump is a theoretical bridge between contemporaneous parallel universes. Because a wormhole time-machine introduces a type of nonlinearity into quantum theory, this sort of communication between parallel universes is consistent with Joseph Polchinski's proposal of an Everett phone (named after Hugh Everett) in Steven Weinberg's formulation of nonlinear quantum mechanics. The possibility of communication between parallel universes has been dubbed interuniversal travel. A wormhole can also be depicted in a Penrose diagram of a Schwarzschild black hole. In the Penrose diagram, an object traveling faster than light will cross the black hole and will emerge from another end into a different space, time or universe. This would be an inter-universal wormhole. Metrics Theories of wormhole metrics describe the spacetime geometry of a wormhole and serve as theoretical models for time travel. An example of a (traversable) wormhole metric is $ds^2 = -c^2\,dt^2 + dl^2 + (b_0^2 + l^2)(d\theta^2 + \sin^2\theta\,d\varphi^2)$, first presented by Ellis (see Ellis wormhole) as a special case of the Ellis drainhole.
One type of non-traversable wormhole metric is the Schwarzschild solution (see the first diagram): $ds^2 = -\left(1 - \frac{2GM}{rc^2}\right)c^2\,dt^2 + \frac{dr^2}{1 - \frac{2GM}{rc^2}} + r^2(d\theta^2 + \sin^2\theta\,d\varphi^2)$. The original Einstein–Rosen bridge was described in an article published in July 1935. For the Schwarzschild spherically symmetric static solution $ds^2 = -\left(1 - \frac{2m}{r}\right)dt^2 + \frac{dr^2}{1 - \frac{2m}{r}} + r^2(d\theta^2 + \sin^2\theta\,d\varphi^2)$, where $ds$ is the proper time and $c = 1$, if one replaces $r$ with $u$ according to $u^2 = r - 2m$, one obtains $ds^2 = -\frac{u^2}{u^2 + 2m}\,dt^2 + 4(u^2 + 2m)\,du^2 + (u^2 + 2m)^2(d\theta^2 + \sin^2\theta\,d\varphi^2)$. For the combined field, gravity and electricity, Einstein and Rosen derived the following Schwarzschild static spherically symmetric solution: $ds^2 = -\left(1 - \frac{2m}{r} - \frac{\varepsilon^2}{2r^2}\right)dt^2 + \frac{dr^2}{1 - \frac{2m}{r} - \frac{\varepsilon^2}{2r^2}} + r^2(d\theta^2 + \sin^2\theta\,d\varphi^2)$, where $\varepsilon$ is the electric charge. The field equations can be written without denominators in the case when $m = 0$. In order to eliminate singularities, one replaces $r$ by $u$ according to the equation $u^2 = r^2 - \frac{\varepsilon^2}{2}$; with $m = 0$ one obtains $ds^2 = -\frac{u^2}{u^2 + \varepsilon^2/2}\,dt^2 + du^2 + \left(u^2 + \frac{\varepsilon^2}{2}\right)(d\theta^2 + \sin^2\theta\,d\varphi^2)$. In fiction Wormholes are a common element in science fiction because they allow interstellar, intergalactic, and sometimes even interuniversal travel within human lifetime scales. In fiction, wormholes have also served as a method for time travel.
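As a quick consistency check on the substitution used in the bridge construction above (a sketch, under the $c = 1$ convention already in force): with $u^2 = r - 2m$ one has $dr = 2u\,du$ and

$$1 - \frac{2m}{r} = \frac{r - 2m}{r} = \frac{u^2}{u^2 + 2m},$$

so that

$$\frac{dr^2}{1 - 2m/r} = 4u^2\,du^2 \cdot \frac{u^2 + 2m}{u^2} = 4(u^2 + 2m)\,du^2, \qquad r^2 = (u^2 + 2m)^2,$$

which reproduces the bridge form of the metric term by term. Since $u$ ranges over all real values while $r = u^2 + 2m \ge 2m$, the two sheets $u > 0$ and $u < 0$ each cover the exterior region and are joined at the throat $u = 0$, which is the geometric content of the Einstein–Rosen bridge.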
Physical sciences
Basics_2
Astronomy
34049
https://en.wikipedia.org/wiki/Wart
Wart
Warts are non-cancerous viral growths usually occurring on the hands and feet but which can also affect other locations, such as the genitals or face. One or many warts may appear. They are distinguished from cancerous tumors in that they are caused by a viral infection, such as a human papillomavirus, rather than by a cancerous growth. Factors that increase the risk include the use of public showers and pools, working with meat, eczema, and a weak immune system. The virus is believed to infect the host through the entrance of a skin wound. A number of types exist, including plantar warts, "filiform warts", and genital warts. Genital warts are often sexually transmitted. Without treatment, most types of warts resolve in months to years. A number of treatments may speed resolution, including salicylic acid applied to the skin and cryotherapy. In those who are otherwise healthy, they do not typically result in significant problems. Treatment of genital warts differs from that of other types. Infection with a virus that weakens the immune system, such as HIV, increases the risk of developing warts. Transmission of wart-causing viruses can be reduced through careful handling of needles or other sharp objects that could breach the skin, through the practice of safe sex or sexual abstinence in the case of genital warts, and, for types spread by surface contact, through behaviors such as wearing proper footwear in shared spaces like public restrooms and locker rooms. Warts are very common, with most people being infected at some point in their lives. The estimated current rate of non-genital warts among the general population is 1–13%. They are more common among young people. Prior to widespread adoption of the HPV vaccine, the estimated rate of genital warts in sexually active women was 12%. Warts have been described as far back as 400 BC by Hippocrates. Types A range of types of wart have been identified, varying in shape and site affected, as well as the type of human papillomavirus involved. These include: Common wart (verruca vulgaris), a raised wart with a roughened surface, most common on hands, but can grow anywhere on the body. Sometimes known as a Palmer wart or Junior wart. Flat wart (verruca plana), a small, smooth flattened wart, flesh-coloured, which can occur in large numbers; most common on the face, neck, hands, wrists, and knees. Filiform or digitate wart, a thread- or finger-like wart, most common on the face, especially near the eyelids and lips. Genital wart (venereal wart, condyloma acuminatum, verruca acuminata), a wart that occurs on the genitalia. Periungual wart, a cauliflower-like cluster of warts that occurs around the nails. Plantar wart (verruca, verruca plantaris), a hard, sometimes painful lump, often with multiple black specks in the center; usually only found on pressure points on the soles of the feet. Mosaic wart, a group of tightly clustered plantar-type warts, commonly on the hands or soles of the feet. Cause Warts are caused by the human papillomavirus (HPV). There are about 130 known types of human papillomaviruses. HPV infects the squamous epithelium, usually of the skin or genitals, but each HPV type is typically only able to infect a few specific areas of the body. Many HPV types can produce a benign growth, often called a "wart" or "papilloma", in the area they infect. Many of the more common HPV and wart types are listed below. Common warts – HPV types 2 and 4 (most common); also types 1, 3, 26, 29, and 57 and others.
Cancers and genital dysplasia – "high-risk" HPV types are associated with cancers, notably cervical cancer, and can also cause some vulvar, vaginal, penile, anal and some oropharyngeal cancers. "Low-risk" types are associated with warts or other conditions. High-risk: 16, 18 (cause the most cervical cancer); also 31, 33, 35, 39, 45, 52, 58, 59, and others. Plantar warts (verruca) – HPV type 1 (most common); also types 2, 3, 4, 27, 28, and 58 and others. Anogenital warts (condylomata acuminata or venereal warts) – HPV types 6 and 11 (most common); also types 42, 44 and others. Low-risk: 6, 11 (most common); also 13, 44, 40, 43, 42, 54, 61, 72, 81, 89, and others. Verruca plana (flat warts) – HPV types 3, 10, and 28. Butcher's warts – HPV type 7. Heck's disease (focal epithelial hyperplasia) – HPV types 13 and 32. Pathophysiology Common warts have a characteristic appearance under the microscope. They have thickening of the stratum corneum (hyperkeratosis), thickening of the stratum spinosum (acanthosis), thickening of the stratum granulosum, rete ridge elongation, and large blood vessels at the dermoepidermal junction. Diagnosis On dermatoscopic examination, warts will commonly have fingerlike or knoblike extensions. Prevention Gardasil is an HPV vaccine aimed at preventing cervical cancers and genital warts. Gardasil is designed to prevent infection with HPV types 16, 18, 6, and 11. HPV types 16 and 18 currently cause about 70% of cervical cancer cases, and also cause some vulvar, vaginal, penile and anal cancers. HPV types 6 and 11 are responsible for 90% of documented cases of genital warts. Gardasil 9 protects against HPV types 6, 11, 16, 18, 31, 33, 45, 52, and 58. HPV vaccines do not currently protect against the virus strains responsible for plantar warts (verrucae). Disinfection The virus is relatively hardy and immune to many common disinfectants. Exposure to 90% ethanol for at least 1 minute, 2% glutaraldehyde, 30% Savlon, and/or 1% sodium hypochlorite can disinfect the pathogen. The virus is resistant to drying and heat, but killed by sufficiently high temperatures and by ultraviolet radiation. Treatment There are many treatments and procedures associated with wart removal. A review of various skin wart treatments concluded that topical treatments containing salicylic acid were more effective than placebo. Cryotherapy appears to be as effective as salicylic acid, but there have been fewer trials. Medication Salicylic acid can be prescribed by a dermatologist in a higher concentration than that found in over-the-counter products. Several over-the-counter products, of roughly two types, are readily available at pharmacies and supermarkets: adhesive pads treated with salicylic acid, and bottles of concentrated salicylic acid and lactic acid solution. Fluorouracil — Fluorouracil cream, a chemotherapy agent sometimes used to treat skin cancer, can be used on particularly resistant warts, by blocking viral DNA and RNA production and repair. Imiquimod is a topical cream that helps the body's immune system fight the wart virus by encouraging interferon production. It has been approved by the U.S. Food and Drug Administration (FDA) for genital warts. Cantharidin, found naturally in the bodies of many members of the beetle family Meloidae, causes dermal blistering. It is used either by itself or compounded with podophyllin. Not FDA approved, but available through Canada or select US compounding pharmacies.
Bleomycin — a more potent chemotherapy drug, which can be injected into deep warts, destroying the viral DNA or RNA. Bleomycin is notably not US FDA approved for this purpose. Possible side effects include necrosis of the digits, nail loss and Raynaud syndrome. The usual treatment is one or two injections. Dinitrochlorobenzene (DNCB), like salicylic acid, is applied directly to the wart. Studies show this method is effective with a cure rate of 80%. But DNCB must be used much more cautiously than salicylic acid; the chemical is known to cause genetic mutations, so it must be administered by a physician. This drug induces an allergic immune response, resulting in inflammation that wards off the wart-causing virus. Cidofovir is an antiviral drug which is injected into HPV lesions within the larynx (laryngeal papillomatosis) as an experimental treatment. Verrutop verruca treatment is a topical solution made from a combination of organic acids, inorganic acids, and metal ions. This solution causes the production of nitrites, which act to denature viral proteins and mummify the wart tissue. The difference between Verrutop and other acid treatments is that it does not damage the surrounding skin. Another over-the-counter product that can aid in wart removal is silver nitrate in the form of a caustic pencil, available at drug stores. In a placebo-controlled study of 70 patients, silver nitrate given over nine days resulted in clearance of all warts in 43% and improvement in warts in 26% one month after treatment compared to 11% and 14%, respectively, in the placebo group. The instructions must be followed to minimize staining of skin and clothing. Occasionally, pigmented scars may develop. Trichloroacetic acid can be used to treat warts if salicylic acid or cryotherapy fail or are not available. It requires repeat treatments every week or so. Side effects are burning and stinging. Procedures Keratolysis, the removal of dead surface skin cells, usually using salicylic acid, blistering agents, immune system modifiers ("immunomodulators"), or formaldehyde, often with mechanical paring of the wart with a pumice stone, blade, etc. Electrodesiccation. Microwave treatment. Cryosurgery or cryotherapy, which involves freezing the wart (generally with liquid nitrogen), creating a blister between the wart and epidermal layer after which the wart and the surrounding dead skin fall off. An average of three to four treatments are required for warts on thin skin. Warts on calloused skin like plantar warts might take dozens or more treatments. Surgical curettage of the wart. Laser treatment – often with a pulse dye laser or carbon dioxide (CO2) laser. Pulse dye lasers (wavelength 582 nm) work by selective absorption by blood cells (specifically hemoglobin). CO2 lasers work by selective absorption by water molecules. Pulse dye lasers are less destructive and more likely to heal without scarring. The CO2 laser works by vaporizing and destroying tissue and skin. Laser treatments can be painful, expensive (though covered by many insurance plans), and not extensively scarring when used appropriately. CO2 lasers will require local anaesthetic. Pulse dye laser treatment does not need conscious sedation or local anesthetic. It takes 2 to 4 treatments, but many more may be needed for extreme cases. Typically, 10–14 days are required between treatments. Preventive measures are important. Infrared coagulator – an intense source of infrared light in a small beam like a laser.
This works essentially on the same principle as laser treatment. It is less expensive. Like the laser, it can cause blistering, pain and scarring. Intralesional immunotherapy with purified candida, MMR, and tuberculin (PPD) protein appears safe and effective. Duct tape occlusion therapy involves placing a piece of duct tape over the wart. The mechanism of action of this technique still remains unknown. Despite several trials, evidence for the efficacy of duct tape therapy is inconclusive. Despite the mixed evidence for efficacy, the simplicity of the method and its limited side-effects leads some researchers to be reluctant to dismiss it. No intervention. Spontaneous resolution within a few years can be recommended. Alternative medicine Daily application of the latex of Chelidonium majus is a traditional treatment. The acrid yellow sap of Greater Celandine is used as a traditional wart remedy. According to English folk belief, touching toads causes warts; according to a German belief, touching a toad under a full moon cures warts. The most common Northern Hemisphere toads have glands that protrude from their skin that superficially resemble warts. Warts are caused by a virus, and toads do not harbor it. A variety of traditional folk remedies and rituals claim to be able to remove warts. In The Adventures of Tom Sawyer, Mark Twain has his characters discuss a variety of such remedies. Tom Sawyer proposes "spunk-water" (or "stump-water", the water collecting in the hollow of a tree stump) as a remedy for warts on the hand. You put your hand into the water at midnight, recite a charm, and then "walk away quick, eleven steps, with your eyes shut, and then turn around three times and walk home without speaking to anybody. Because if you speak the charm's busted." By contrast, Huckleberry Finn's planned remedy involves throwing a dead cat into a graveyard as a devil or devils come to collect a recently buried wicked person. Another remedy involved splitting a bean, drawing blood from the wart and putting it on one of the halves, and burying that half at a crossroads at midnight. The theory of operation is that the blood on the buried bean will draw away the wart. Twain is recognized as an early collector and recorder of genuine American folklore. Similar practices are recorded elsewhere. In Louisiana, one remedy for warts involves rubbing the wart with a potato, which is then buried; when the "buried potato dries up, the wart will be cured". Another remedy similar to Twain's is reported from Northern Ireland, where water from a specific well on Rathlin Island is credited with the power to cure warts. History Surviving ancient medical texts show that warts were a documented disease since at least the time of Hippocrates, who lived – . In the book De Medicina by the Roman physician Aulus Cornelius Celsus, who lived – , different types of warts were described. Celsus described Myrmecia, today recognized as the plantar wart, and categorized acrochordon (a skin tag) as a wart. In the 13th century, warts were described in books published by the surgeons William of Saliceto and Lanfranc of Milan. The word verruca to describe a wart was introduced by the physician Daniel Sennert, who described warts in his 1636 book Hypomnemata physicae. The cause of warts was initially disputed in the medical profession. In the early 18th century the physician Daniel Turner, who published the first book on dermatology, suggested that warts were caused by damaged nerves close to the skin.
In the mid-18th century, the surgeon John Hunter popularized the belief that warts were caused by a bacterial syphilis infection. The surgeon Benjamin Bell documented that warts were caused by a disease entirely unrelated to syphilis, and established a causal link between warts and cancer. In the 19th century, the chief physician of Verona Hospital established a link between warts and cervical cancer. But in 1874 it was noted by the dermatologist Ferdinand Ritter von Hebra that while various theories were advanced by the medical profession, the "influences causing warts are still very obscure". In 1907 the physician Giuseppe Ciuffo was the first to demonstrate that warts were caused by a virus infection. In 1976 the virologist Harald zur Hausen was the first to discover that warts were caused by the human papillomavirus (HPV). His continued research established the evidence necessary to develop an HPV vaccine, which first became available in 2006. Other animals
Biology and health sciences
Symptoms and signs
Health
34061
https://en.wikipedia.org/wiki/Winter
Winter
Winter is the coldest and darkest season of the year in polar and temperate climates. It occurs after autumn and before spring. The tilt of Earth's axis causes seasons; winter occurs when a hemisphere is oriented away from the Sun. Different cultures define different dates as the start of winter, and some use a definition based on weather. When it is winter in the Northern Hemisphere, it is summer in the Southern Hemisphere, and vice versa. Winter typically brings precipitation that, depending on a region's climate, is mainly rain or snow. The moment of winter solstice is when the Sun's elevation with respect to the North or South Pole is at its most negative value; that is, the Sun is at its farthest below the horizon as measured from the pole. The day on which this occurs has the shortest day and the longest night, with day length increasing and night length decreasing as the season progresses after the solstice. The earliest sunset and latest sunrise dates outside the polar regions differ from the date of the winter solstice and depend on latitude. They differ due to the variation in the solar day throughout the year caused by the Earth's elliptical orbit (see: earliest and latest sunrise and sunset). Etymology The English word winter comes from the Proto-Germanic noun *wintru-, whose origin is unclear. Several proposals exist, a commonly mentioned one connecting it to the Proto-Indo-European root *wed- 'water' or a nasal infix variant *wend-. Cause The tilt of the Earth's axis relative to its orbital plane plays a large role in the formation of weather. The Earth is tilted at an angle of 23.44° to the plane of its orbit, causing different latitudes to directly face the Sun as the Earth moves through its orbit. This variation brings about seasons. When it is winter in the Northern Hemisphere, the Southern Hemisphere faces the Sun more directly and thus experiences warmer temperatures than the Northern Hemisphere. Conversely, winter in the Southern Hemisphere occurs when the Northern Hemisphere is tilted more toward the Sun. From the perspective of an observer on the Earth, the winter Sun has a lower maximum altitude in the sky than the summer Sun. During winter in either hemisphere, the lower altitude of the Sun causes the sunlight to hit the Earth at an oblique angle. Thus a lower amount of solar radiation strikes the Earth per unit of surface area. Furthermore, the light must travel a longer distance through the atmosphere, allowing the atmosphere to dissipate more heat. Compared with these effects, the effect of the changes in the distance of the Earth from the Sun (due to the Earth's elliptical orbit) is negligible. The manifestation of the meteorological winter (freezing temperatures) in the northerly snow-prone latitudes is highly variable, depending on elevation, position versus marine winds, and the amount of precipitation. For instance, within Canada (a country of cold winters), Winnipeg, on the Great Plains (a long way from the ocean), has a January high of and a low of . In comparison, Vancouver, on the west coast (with a marine influence from moderating Pacific winds), has a January low of , with days well above freezing, at . Both places are at 49°N latitude and in the same western half of the continent. A similar but less extreme effect is found in Europe: in spite of their northerly latitude, the British Isles have no non-mountain weather stations with a below-freezing mean January temperature. 
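The oblique-angle effect described above can be made concrete with a small calculation: ignoring atmospheric losses, the direct solar flux received per unit of horizontal surface area scales with the sine of the Sun's altitude. The sketch below assumes illustrative noon altitudes for a site near 50° latitude (roughly 63° at the summer solstice and 17° at the winter solstice); these numbers are not taken from the text.

import math

def relative_insolation(solar_altitude_deg: float) -> float:
    """Direct flux per unit of horizontal area, relative to an overhead Sun.

    Ignores the longer atmospheric path at low altitudes, which reduces
    winter insolation further; below the horizon there is no direct sun.
    """
    return max(0.0, math.sin(math.radians(solar_altitude_deg)))

summer = relative_insolation(63)   # noon, summer solstice, ~50 degrees latitude
winter = relative_insolation(17)   # noon, winter solstice, ~50 degrees latitude
print(f"summer noon: {summer:.2f}, winter noon: {winter:.2f}")
print(f"winter fraction of summer flux: {winter / summer:.0%}")

Even before the longer atmospheric path is accounted for, the winter noon Sun at such a latitude delivers only about a third of the summer value to level ground, which is the per-unit-area effect the section describes.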
Meteorological reckoning Meteorological reckoning is the method of measuring the winter season used by meteorologists based on "sensible weather patterns" for record keeping purposes, so the start of meteorological winter varies with latitude. Winter is often defined by meteorologists to be the three calendar months with the lowest average temperatures. This corresponds to the months of December, January and February in the Northern Hemisphere, and June, July and August in the Southern Hemisphere. The coldest average temperatures of the season are typically experienced in January or February in the Northern Hemisphere and in June, July or August in the Southern Hemisphere. Nighttime predominates in the winter season, and in some regions, winter has the highest rate of precipitation as well as prolonged dampness because of permanent snow cover or high precipitation rates coupled with low temperatures, precluding evaporation. Blizzards often develop and cause many transportation delays. Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from above mixing with colder, surface-based air. They are made of simple hexagonal ice crystals. The Swedish Meteorological and Hydrological Institute (SMHI) defines thermal winter as occurring when the daily mean temperatures are below for five consecutive days. According to the SMHI, winter in Scandinavia is more pronounced when Atlantic low-pressure systems take more southerly and northerly routes, leaving the path open for high-pressure systems to come in and cold temperatures to occur. As a result, the coldest January on record in Stockholm, in 1987, was also the sunniest. Accumulations of snow and ice are commonly associated with winter in the Northern Hemisphere, due to the large land masses there. In the Southern Hemisphere, the more maritime climate and the relative lack of land south of 40°S make the winters milder; thus, snow and ice are less common in inhabited regions of the Southern Hemisphere. In this region, snow occurs every year in elevated regions such as the Andes, the Great Dividing Range in Australia, and the mountains of New Zealand, and also in the southerly Patagonia region of southern Argentina. Snow occurs year-round in Antarctica. Astronomical and other calendar-based reckoning In the Northern Hemisphere, some authorities define the period of winter based on astronomical fixed points (i.e., based solely on the position of the Earth in its orbit around the Sun), regardless of weather conditions. In one version of this definition, winter begins at the winter solstice and ends at the March equinox. These dates are somewhat later than those used to define the beginning and end of the meteorological winter — usually considered to span the entirety of December, January, and February in the Northern Hemisphere and June, July, and August in the Southern. Astronomically, the winter solstice — being the day of the year that has fewest hours of daylight — ought to be in the middle of the season, but seasonal lag means that the coldest period normally follows the solstice by a few weeks. In some cultures, the season is regarded as beginning at the solstice and ending on the following equinox. In the Northern Hemisphere, depending on the year, this corresponds to the period between 20, 21 or 22 December and 19, 20 or 21 March. In an old Norwegian tradition, winter begins on 14 October and ends on the last day of February.
In many countries in the Southern Hemisphere, including Australia, New Zealand, and South Africa, winter begins on 1 June and ends on 31 August. In Celtic nations such as Ireland (using the Irish calendar) and in Scandinavia, the winter solstice is traditionally considered as midwinter, with the winter season beginning 1 November, on All Hallows, or Samhain. Winter ends and spring begins on Imbolc, or Candlemas, which is 1 or 2 February. In Chinese astronomy and other East Asian calendars, winter is taken to commence on or around 7 November, on Lìdōng, and end with the arrival of spring on 3 or 4 February, on Lìchūn. Late Roman Republic scholar Marcus Terentius Varro defined winter as lasting from the fourth day before the Ides of November (10 November) to the eighth day before the Ides of Februarius (6 February). This system of seasons is based on the length of days exclusively. The three-month period of the shortest days and weakest solar radiation occurs during November, December and January in the Northern Hemisphere and May, June and July in the Southern Hemisphere. Many mainland European countries tended to recognize Martinmas or St. Martin's Day (11 November) as the first calendar day of winter. The day falls at the midpoint between the old Julian equinox and solstice dates. Also, Valentine's Day (14 February) is recognized by some countries as heralding the first rites of spring, such as flowers blooming. The three-month period associated with the coldest average temperatures typically begins somewhere in late November or early December in the Northern Hemisphere and lasts through late February or early March. This "thermological winter" is earlier than the solstice delimited definition, but later than the daylight (Celtic or Chinese) definition. Depending on seasonal lag, this period will vary between climatic regions. Since by almost all definitions valid for the Northern Hemisphere, winter spans 31 December and 1 January, the season is split across years, just like summer in the Southern Hemisphere. Each calendar year includes parts of two winters. This causes ambiguity in associating a winter with a particular year, e.g. "Winter 2018". Solutions for this problem include naming both years, e.g. "Winter 18/19", or settling on the year the season starts in or on the year most of its days belong to, which is the later year for most definitions. Ecological reckoning and activity Ecological reckoning of winter differs from calendar-based by avoiding the use of fixed dates. It is one of six seasons recognized by most ecologists who customarily use the term hibernal for this period of the year (the other ecological seasons being prevernal, vernal, estival, serotinal, and autumnal). The hibernal season coincides with the main period of biological dormancy each year whose dates vary according to local and regional climates in temperate zones of the Earth. The appearance of flowering plants like the crocus can mark the change from ecological winter to the prevernal season as early as late January in mild temperate climates. To survive the harshness of winter, many animals have developed different behavioral and morphological adaptations for overwintering: Migration is a common effect of winter upon animals such as migratory birds. Some butterflies also migrate seasonally. Hibernation is a state of reduced metabolic activity during the winter. Some animals "sleep" during winter and only come out when the warm weather returns; e.g., gophers, frogs, snakes, and bats. 
Some animals store food for the winter and live on it instead of hibernating completely. This is the case for squirrels, beavers, skunks, badgers, and raccoons. Resistance is observed when an animal endures winter but changes in ways such as color and musculature. The color of the fur or plumage changes to white (so as to blend in with snow) and thus retains its cryptic coloration year-round. Examples are the rock ptarmigan, Arctic fox, weasel, white-tailed jackrabbit, and mountain hare. Some fur-coated mammals grow a heavier coat during the winter; this improves the heat-retention qualities of the fur. The coat is then shed following the winter season to allow better cooling. The heavier coat in winter made it a favorite season for trappers, who sought more profitable skins. Snow also affects the ways animals behave; many take advantage of the insulating properties of snow by burrowing in it. Mice and voles typically live under the snow layer. Some annual plants never survive the winter. Other annual plants require winter cold to complete their life cycle; this is known as vernalization. As for perennials, many small ones profit from the insulating effects of snow by being buried in it. Larger plants, particularly deciduous trees, usually let their upper part go dormant, but their roots are still protected by the snow layer. Few plants bloom in the winter, one exception being the flowering plum, which flowers in time for Chinese New Year. The process by which plants become acclimated to cold weather is called hardening. Examples Exceptionally cold 1683–1684, "The Great Frost", when the Thames, hosting the River Thames frost fairs, was frozen all the way up to London Bridge and remained frozen for about two months. Ice was about thick in London and about thick in Somerset. The sea froze up to out around the coast of the southern North Sea, causing severe problems for shipping and preventing use of many harbors. 1739–1740, one of the most severe winters in the UK on record. The Thames remained frozen over for about 8 weeks. The Irish famine of 1740–1741 claimed the lives of at least 300,000 people. 1816 was the Year Without a Summer in the Northern Hemisphere. The unusual coolness of the winter of 1815–1816 and of the following summer was primarily due to the eruption of Mount Tambora in Indonesia, in April 1815. There were secondary effects from an unknown eruption or eruptions around 1810, and several smaller eruptions around the world between 1812 and 1814. The cumulative effects were worldwide but were especially strong in the Eastern United States, Atlantic Canada, and Northern Europe. Frost formed in May in New England, killing many newly planted crops, and the summer never recovered. Snow fell in New York and Maine in June, and ice formed in lakes and rivers in July and August. In the UK, snow drifts remained on hills until late July, and the Thames froze in September. Agricultural crops failed and livestock died in much of the Northern Hemisphere, resulting in food shortages and the worst famine of the 19th century. 1887–1888: There were record cold temperatures in the Upper Midwest, heavy snowfalls worldwide, and severe storms, including the Schoolhouse Blizzard of 1888 (in the Midwest in January) and the Great Blizzard of 1888 (in the Eastern US and Canada in March). In Europe, the winters of early 1947, February 1956, 1962–1963, 1981–1982, and 2009–2010 were abnormally cold.
The UK winter of 1946–1947 started out relatively normal but became one of the snowiest UK winters to date, with nearly continuous snowfall from late January until March. In South America, the winter of 1975 was one of the most severe, with record snow occurring at 25°S in cities of low altitude, with the registration of −17 °C (1.4 °F) in some parts of southern Brazil. In the eastern United States and Canada, the winter of 2013–2014 and the second half of February 2015 were abnormally cold. Historically significant 1310–1330: Many severe winters and cold, wet summers in Europe, the first clear manifestation of the unpredictable weather of the Little Ice Age that lasted for several centuries (from about 1300 to 1900). The persistently cold, wet weather caused great hardship, was primarily responsible for the Great Famine of 1315–1317, and strongly contributed to the weakened immunity and malnutrition leading up to the Black Death (1348–1350). 1600–1602: Extremely cold winters in Switzerland and the Baltic region after the eruption of Huaynaputina in Peru in 1600. 1607–1608: In North America, ice persisted on Lake Superior until June. Londoners held their first frost fair on the frozen-over River Thames. 1622: In Turkey, the Golden Horn and southern section of the Bosphorus froze over. 1690s: Extremely cold, snowy, severe winters. Ice surrounded Iceland for miles in every direction. 1779–1780: Scotland's coldest winter on record, and ice surrounded Iceland in every direction (as in the 1690s). In the United States, a record five-week cold spell bottomed out at in Hartford, Connecticut and in New York City. The Hudson River and New York's harbor froze over. 1783–1786: The Thames partially froze, and snow remained on the ground for months. In February 1784, the North Carolina was frozen in Chesapeake Bay. 1794–1795: A severe winter, with the coldest January in the UK and the lowest temperature ever recorded in London: on 25 January. The cold began on Christmas Eve and lasted until late March, with a few temporary warm-ups. The Severn and Thames froze, and frost fairs started up again. The French army tried to invade the Netherlands over its frozen rivers, while the Dutch fleet was stuck in its harbor. The winter had easterlies (from Siberia) as its dominant feature. 1813–1814: Severe cold, the last freeze-over of the Thames, and the last frost fair. (Removal of old London Bridge and changes to the river's banks made freeze-overs less likely.) 1883–1888: Colder temperatures worldwide, including an unbroken string of abnormally cold and brutal winters in the Upper Midwest, related to the explosion of Krakatoa in August 1883. There was snow recorded in the UK as early as October and as late as July during this period. 1976–1977: One of the coldest winters in the US in decades. 1985: Arctic outbreak in the US resulting from a shift in the polar vortex, with many cold temperature records broken. 2002–2003 was an unusually cold winter in the Northern and Eastern US. 2010–2011: Persistent bitter cold in the entire eastern half of the US from December onward, with few or no midwinter warm-ups, and with cool conditions continuing into spring. La Niña and negative Arctic oscillation were strong factors. Heavy and persistent precipitation contributed to almost constant snow cover in the Northeastern US, which finally receded in early May. The winter of 2011 was one of the coldest on record in New Zealand, with sea-level snow falling in Wellington in July for the first time in 35 years and a much heavier snowstorm for three days in a row in August.
Effect on humans Humans are sensitive to winter cold, which compromises the body's ability to maintain both core and surface heat. Slipping on icy surfaces is a common cause of winter injuries. Other injuries from the cold include: Hypothermia — Shivering, leading to uncoordinated movements and death. Frostbite — Freezing of skin, leading to loss of feeling and damaged tissue. Trench foot — Numbness, leading to damaged tissue and gangrene. Chilblains — Capillary damage in digits that can lead to more severe cold injuries. Rates of influenza, COVID-19, and other respiratory diseases also increase during the winter. Mythology In Persian culture, the winter solstice is called Yaldā (meaning: birth) and has been celebrated for thousands of years. It is referred to as the eve of the birth of Mithra, who symbolised light, goodness and strength on Earth. In Greek mythology, Hades kidnapped Persephone to be his wife. Zeus ordered Hades to return her to Demeter, the goddess of the Earth and her mother. Hades tricked Persephone into eating the food of the dead, so Zeus decreed that she spend six months with Demeter and six months with Hades. During the months her daughter was with Hades, Demeter grieved and caused winter. In Welsh mythology, Gwyn ap Nudd abducted a maiden named Creiddylad. On May Day, her lover, Gwythyr ap Greidawl, fought Gwyn to win her back. The battle between them represented the contest between summer and winter. Personifications of winter include Old Man Winter, Jack Frost, Ded Moroz, Snegurochka, and Vetr.
Physical sciences
Seasons
null
34062
https://en.wikipedia.org/wiki/WAV
WAV
Waveform Audio File Format (WAVE, or WAV due to its filename extension; pronounced "wave" or "wav") is an audio file format standard for storing an audio bitstream on personal computers. The format was developed and published for the first time in 1991 by IBM and Microsoft. It is the main format used on Microsoft Windows systems for uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format. WAV is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in chunks, and thus is similar to the 8SVX and the Audio Interchange File Format (AIFF) formats used on Amiga and Macintosh computers, respectively. Description The WAV file is an instance of a Resource Interchange File Format (RIFF) defined by IBM and Microsoft. The RIFF format acts as a wrapper for various audio coding formats. Though a WAV file can contain compressed audio, the most common WAV audio format is uncompressed audio in the linear pulse-code modulation (LPCM) format. LPCM is also the standard audio coding format for audio CDs, which store two-channel LPCM audio sampled at 44.1 kHz with 16 bits per sample. Since LPCM is uncompressed and retains all of the samples of an audio track, professional users or audio experts may use the WAV format with LPCM audio for maximum audio quality. WAV files can also be edited and manipulated with relative ease using software. On Microsoft Windows, the WAV format supports compressed audio using the Audio Compression Manager (ACM). Any ACM codec can be used to compress a WAV file. The user interface (UI) for ACM may be accessed through various programs that use it, including Sound Recorder in some versions of Windows. Beginning with Windows 2000, a WAVE_FORMAT_EXTENSIBLE header was defined which specifies multiple audio channel data along with speaker positions, eliminates ambiguity regarding sample types and container sizes in the standard WAV format and supports defining custom extensions to the format. File specifications RIFF A RIFF file is a tagged file format. It has a specific container format (a chunk) with a header that includes a four-character tag (FourCC) and the size (number of bytes) of the chunk. The tag specifies how the data within the chunk should be interpreted, and there are several standard FourCC tags. Tags consisting of all capital letters are reserved tags. The outermost chunk of a RIFF file has a RIFF tag; the first four bytes of chunk data are an additional FourCC tag that specifies the form type and is followed by a sequence of subchunks. In the case of a WAV file, the additional tag is WAVE. The remainder of the RIFF data is a sequence of chunks describing the audio information. The advantage of a tagged file format is that the format can be extended later while maintaining backward compatibility. The rule for a RIFF (or WAV) reader is that it should ignore any tagged chunk that it does not recognize. The reader will not be able to use the new information, but the reader should not be confused. The specification for RIFF files includes the definition of an INFO chunk. The chunk may include information such as the title of the work, the author, the creation date, and copyright information. Although the INFO chunk was defined for RIFF in version 1.0, the chunk was not referenced in the formal specification of a WAV file. Many readers had trouble processing this.
Consequently, the safest thing to do from an interchange standpoint was to omit the INFO chunk and other extensions and send a lowest-common-denominator file. There are other INFO chunk placement problems. RIFF files were expected to be used in international environments, so there is a CSET chunk to specify the country code, language, dialect, and code page for the strings in a RIFF file. For example, specifying an appropriate CSET chunk should allow the strings in an INFO chunk (and other chunks throughout the RIFF file) to be interpreted as Cyrillic or Japanese characters. RIFF also defines a JUNK chunk whose contents are uninteresting. The chunk allows a chunk to be deleted by just changing its FourCC. The chunk could also be used to reserve some space for future edits so the file could be modified without being resized. A later definition of RIFF introduced a similar PAD chunk. RIFF WAVE The top-level definition of a WAV file is:

<WAVE-form> → RIFF('WAVE'
    <fmt-ck>             // Format of the file
    [<fact-ck>]          // Fact chunk
    [<cue-ck>]           // Cue points
    [<playlist-ck>]      // Playlist
    [<assoc-data-list>]  // Associated data list
    <wave-data> )        // Wave data

The top-level RIFF form uses a WAVE tag. It is followed by a mandatory <fmt-ck> chunk that describes the format of the sample data that follows. This chunk includes information such as the sample encoding, number of bits per channel, the number of channels, and the sample rate. The WAV specification includes some optional features. The optional <fact-ck> chunk reports the number of samples for some compressed coding schemes. The <cue-ck> chunk identifies some significant sample numbers in the wave file. The <playlist-ck> chunk allows the samples to be played out of order or repeated rather than just from beginning to end. The associated data list (<assoc-data-list>) allows labels and notes to be attached to cue points; text annotation may be given for a group of samples (e.g., caption information). Finally, the mandatory <wave-data> chunk contains the actual samples in the format previously specified. Note that the WAV file definition does not show where an INFO chunk should be placed. It is also silent about the placement of a CSET chunk (which specifies the character set used). The RIFF specification attempts to be a formal specification, but its formalism lacks the precision seen in other tagged formats. For example, the RIFF specification does not clearly distinguish between a set of subchunks and an ordered sequence of subchunks. The RIFF form chunk suggests it should be a sequence container. Sequencing information is specified in the RIFF form of a WAV file consistent with the formalism: "However, <fmt-ck> must always occur before <wave-data>, and both of these chunks are mandatory in a WAVE file." The specification suggests a LIST chunk is also a sequence: "A LIST chunk contains a list, or ordered sequence, of subchunks." However, the specification does not give a formal specification of the INFO chunk; an example INFO LIST chunk ignores the chunk sequence implied in the INFO description. The LIST chunk definition for <wave-data> does use the LIST chunk as a sequence container with good formal semantics. The WAV specification supports, and most WAV files use, a single contiguous array of audio samples. The specification also supports discrete blocks of samples and silence that are played in order. The specification for the sample data contains apparent errors: The <wave-data> contains the waveform data.
It is defined as follows:

<wave-data>  → { <data-ck> | <data-list> }
<data-ck>    → data( <wave-data> )
<wave-list>  → LIST( 'wavl' { <data-ck> |        // Wave samples
                              <silence-ck> }... ) // Silence
<silence-ck> → slnt( <dwSamples:DWORD> )          // Count of silent samples

Apparently <data-list> (undefined) and <wave-list> (defined but not referenced) should be identical. Even with this resolved, the productions then allow a <data-ck> to contain a recursive <wave-data> (which implies data interpretation problems). To avoid the recursion, the specification can be interpreted as:

<wave-data>  → { <data-ck> | <wave-list> }
<data-ck>    → data( <bSampleData:BYTE> ... )
<wave-list>  → LIST( 'wavl' { <data-ck> |        // Wave samples
                              <silence-ck> }... ) // Silence
<silence-ck> → slnt( <dwSamples:DWORD> )          // Count of silent samples

WAV files can contain embedded IFF lists, which can contain several sub-chunks. WAV file header This is an example of a WAV file header (44 bytes). Data is stored in little-endian byte order.

[Master RIFF chunk]
  FileTypeBlocID (4 bytes) : Identifier « RIFF » (0x52, 0x49, 0x46, 0x46)
  FileSize       (4 bytes) : Overall file size minus 8 bytes
  FileFormatID   (4 bytes) : Format = « WAVE » (0x57, 0x41, 0x56, 0x45)

[Chunk describing the data format]
  FormatBlocID   (4 bytes) : Identifier « fmt␣ » (0x66, 0x6D, 0x74, 0x20)
  BlocSize       (4 bytes) : Chunk size minus 8 bytes, which is 16 bytes here (0x10)
  AudioFormat    (2 bytes) : Audio format (1: PCM integer, 3: IEEE 754 float)
  NbrChannels    (2 bytes) : Number of channels
  Frequency      (4 bytes) : Sample rate (in hertz)
  BytePerSec     (4 bytes) : Number of bytes to read per second (Frequency * BytePerBloc)
  BytePerBloc    (2 bytes) : Number of bytes per block (NbrChannels * BitsPerSample / 8)
  BitsPerSample  (2 bytes) : Number of bits per sample

[Chunk containing the sampled data]
  DataBlocID     (4 bytes) : Identifier « data » (0x64, 0x61, 0x74, 0x61)
  DataSize       (4 bytes) : SampledData size
  SampledData

Metadata As a derivative of RIFF, WAV files can be tagged with metadata in the INFO chunk. In addition, WAV files can embed any kind of metadata, including but not limited to Extensible Metadata Platform (XMP) data or ID3 tags in extra chunks. The RIFF specification requires that applications ignore chunks they do not recognize, and applications may not necessarily use this extra information. Popularity Uncompressed WAV files are large, so file sharing of WAV files over the Internet is uncommon except among video, music and audio professionals. The high resolution of the format makes it suitable for retaining first-generation archived files of high quality, for use on a system where disk space and network bandwidth are not constraints. Use by broadcasters In spite of their large size, uncompressed WAV files are used by most radio broadcasters, especially those that have adopted a tapeless system. BBC Radio in the UK requires LPCM 48 kHz 16-bit WAV audio as standard for all content made for broadcast on its stations. The UK commercial radio company Global Radio uses 44.1 kHz 16-bit two-channel WAV files throughout their broadcast chain. The ABC "D-Cart" system, which was developed by the Australian broadcaster, uses 48 kHz 16-bit two-channel WAV files. The Digital Radio Mondiale consortium uses WAV files as an informal standard for transmitter simulation and receiver testing. Limitations The WAV format is limited to files that are less than 4 GiB, because of its use of a 32-bit unsigned integer to record the file size in the header.
Although this is equivalent to about 6.8 hours of CD-quality audio at 44.1 kHz, 16-bit stereo, it is sometimes necessary to exceed this limit, especially when greater sampling rates, bit resolutions or channel counts are required. The W64 format was therefore created for use in Sound Forge. Its 64-bit file size field in the header allows for much longer recording times. The RF64 format specified by the European Broadcasting Union has also been created to solve this problem. Non-audio data Since the sampling rate of a WAV file can vary from 1 Hz to about 4.3 GHz (the sample-rate field is a 32-bit unsigned integer), and the number of channels can be as high as 65535, WAV files have also been used for non-audio data. LTspice, for instance, can store multiple circuit trace waveforms in separate channels, at any appropriate sampling rate, with the full-scale range representing ±1 V or A rather than a sound pressure. Audio compact discs Audio compact discs (CDs) do not use the WAV file format, using instead Red Book audio. The commonality is that audio CDs are encoded as uncompressed 16-bit 44.1 kHz stereo LPCM, which is one of the formats supported by WAV. Comparison of coding schemes Audio in WAV files can be encoded in a variety of audio coding formats, such as GSM or MP3, to reduce the file size. All WAV files, even those that use MP3 compression, use the .wav extension. The following compares the monophonic (not stereophonic) audio quality and compression bitrates of audio coding formats available for WAV files, including LPCM, ADPCM, Microsoft GSM 06.10, CELP, SBC, Truespeech and MPEG Layer-3. These are the default ACM codecs that come with Windows.
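To make the 44-byte header layout described above concrete, here is a minimal, hedged sketch in Python (standard library only) that packs such a header and then unpacks it. It assumes the canonical uncompressed-PCM case with no extra chunks; the helper name make_header is invented for this illustration and is not part of any WAV API. A robust reader would instead walk the chunk sequence, skipping unrecognized FourCCs as the RIFF rule requires.

import struct

def make_header(nbr_channels=2, frequency=44100, bits_per_sample=16, data_size=0):
    # Field names follow the header description above; this is the
    # fixed 44-byte PCM layout only, not a general RIFF writer.
    byte_per_bloc = nbr_channels * bits_per_sample // 8
    byte_per_sec = frequency * byte_per_bloc
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",              # little-endian throughout
        b"RIFF", 36 + data_size, b"WAVE",  # master RIFF chunk
        b"fmt ", 16,                       # format chunk, 16-byte body
        1,                                 # AudioFormat: 1 = PCM integer
        nbr_channels, frequency,
        byte_per_sec, byte_per_bloc, bits_per_sample,
        b"data", data_size,                # data chunk header; samples follow
    )

fields = struct.unpack("<4sI4s4sIHHIIHH4sI", make_header())
assert fields[0] == b"RIFF" and fields[2] == b"WAVE"
print("channels:", fields[6], "sample rate:", fields[7], "Hz")

With data_size = 0 this produces the 44 header bytes only; appending the SampledData and updating the two size fields would yield a playable file.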
Wolverine
The wolverine (Gulo gulo), also called the carcajou or quickhatch (from East Cree, kwiihkwahaacheew), is the largest land-dwelling member of the family Mustelidae. It is a muscular carnivore and a solitary animal. The wolverine has a reputation for ferocity and strength out of proportion to its size, with the documented ability to kill prey many times larger than itself. The wolverine is found primarily in remote reaches of the Northern boreal forests and subarctic and alpine tundra of the Northern Hemisphere, with the greatest numbers in Northern Canada, the U.S. state of Alaska, the mainland Nordic countries of Europe, and throughout western Russia and Siberia. Its population has steadily declined since the 19th century owing to trapping, range reduction and habitat fragmentation. The wolverine is now essentially absent from the southern end of its range in both Europe and North America. Naming The wolverine's questionable reputation as an insatiable glutton (reflected in its Latin genus name Gulo, meaning "glutton") may be in part due to a false etymology. The less common name for the animal in Norwegian, fjellfross, meaning "mountain cat", is thought to have worked its way into German as Vielfraß, which means "glutton" (literally "devours much"). Its name in other West Germanic languages is similar. The Finnish name is ahma, derived from ahmatti, which is translated as "glutton". Similarly, the Estonian name is ahm, with the equivalent meaning to the Finnish name. In Lithuanian, it is ernis; in Latvian, tinis or āmrija. The Eastern Slavic росомаха (rosomakha) and the Polish and Czech name rosomák seem to be borrowed from the Finnish rasva-maha (fat belly). Similarly, the Hungarian name is rozsomák or torkosborz, which means "gluttonous badger". In French-speaking parts of Canada, the wolverine is referred to as carcajou, borrowed from the Innu-aimun or Montagnais kuàkuàtsheu. However, in France, the wolverine's name is glouton (glutton). Purported gluttony is reflected neither in the English name wolverine nor in the names used in North Germanic languages. The English word wolverine (an alteration of the earlier form wolvering, of uncertain origin) probably implies "a little wolf". The name in Proto-Norse, erafaz, and Old Norse, jarfr, lives on in the regular Icelandic name jarfi, regular Norwegian name jerv, regular Swedish name järv and regular Danish name jærv. Taxonomy and evolutionary history Classification Genetic evidence suggests that the wolverine is most closely related to the tayra and martens, all of which shared a Eurasian ancestor. There are two subspecies: the Old World form, Gulo gulo gulo, and the New World form, G. g. luscus. Some authors had described as many as four additional North American subspecies, including ones limited to Vancouver Island (G. g. vancouverensis) and the Kenai Peninsula in Alaska (G. g. katschemakensis). However, the most currently accepted taxonomy recognizes either the two continental subspecies or G. gulo as a single Holarctic taxon. Evolution Recently compiled genetic evidence suggests most of North America's wolverines are descended from a single source, likely originating from Beringia during the last glaciation and rapidly expanding thereafter, though there is considerable uncertainty in this conclusion owing to the difficulty of collecting samples in the extremely depleted southern extent of the range. Physical characteristics Anatomically, the wolverine is an elongated animal that is low to the ground.
With strong limbs, broad and rounded head, small eyes and short rounded ears, it most closely resembles a large fisher. Though its legs are short, its large, five-toed paws with crampon-like claws and plantigrade posture enable it to climb up and over steep cliffs, trees and snow-covered peaks with relative ease. The adult wolverine is about the size of a medium dog, with a body length ranging from ; standing at the shoulder; and a tail length of . Weight is usually in males, and in females . Exceptionally large males of as much as are referenced in Soviet literature, though such weights are deemed in Mammals of the Soviet Union to be improbable. The males are often 10–15% larger than the females in linear measurements and can be 30–40% greater in weight. According to some sources, Eurasian wolverines are claimed to be larger and heavier than those in North America, with weights reaching up to . However, this may refer more specifically to areas such as Siberia, as data from Fennoscandian wolverines shows they are typically around the same size as their American counterparts. It is the largest of the terrestrial mustelids; only the marine-dwelling sea otter, the giant otter of the Amazon basin and the semi-aquatic African clawless otter are larger, while the European badger may reach a similar body mass, especially in autumn. Wolverines have thick, dark, oily fur which is highly hydrophobic, making it resistant to frost. This has led to its traditional popularity among hunters and trappers as a lining in jackets and parkas in Arctic conditions. A light-silvery facial mask is distinct in some individuals, and a pale buff stripe runs laterally from the shoulders along the side and crosses the rump just above a bushy tail. Some individuals display prominent white hair patches on their throats or chests. Like many other mustelids, it has potent anal scent glands used for marking territory and sexual signaling. The pungent odor has given rise to the nicknames "skunk bear" and "nasty cat". The anal gland secretions obtained from six animals were complex and variable: 123 compounds were detected in total, with the number per animal ranging from 45 to 71 compounds. Only six compounds were common to all extracts: 3-methylbutanoic acid, 2-methylbutanoic acid, phenylacetic acid, alpha-tocopherol, cholesterol, and a compound tentatively identified as 2-methyldecanoic acid. The highly odoriferous thietanes and dithiolanes found in the anal gland secretions of some members of the Mustelinae [ferrets, mink, stoats, and weasels (Mustela spp.) and zorillas (Ictonyx spp.)] were not observed. The composition of the wolverine's anal gland secretion is similar to that of two other members of the Mustelinae, the pine and beech martens (Martes spp.). Wolverines, like other mustelids, possess a special upper molar in the back of the mouth that is rotated 90 degrees, towards the inside of the mouth. This special characteristic allows wolverines to tear off meat from prey or carrion that has been frozen solid. Distribution Wolverines live primarily in isolated arctic, boreal, and alpine regions of northern Canada, Alaska, Siberia, and Fennoscandia; they are also native to European Russia, the Baltic countries, the Russian Far East, northeast China and Mongolia. Wolverine remains have been found in Ukraine, but the species is extirpated there today, and it is unclear whether those animals had formed sustainable populations.
Unique records of encounters with wolverines have been noted in Latvia, the most recent one being in late July 2022 (although this can be disputed because of the unclear footage); the population was widespread in the 16th and 17th centuries, but nowadays the species is not native to the area. Most New World wolverines live in Canada and Alaska. However, wolverines were once recorded as also being present in Colorado, areas of the southwestern United States (Arizona and New Mexico), the Midwest (Indiana, Nebraska, North and South Dakota, Ohio, Minnesota, and Wisconsin), New England (Maine, New Hampshire, Vermont, and Massachusetts) and in New York and Pennsylvania. In the Sierra Nevada, wolverines were sighted near Winnemucca Lake in spring 1995 and at Toe Jam Lake north of the Yosemite border in 1996, and were later photographed by baited cameras, including in 2008 and 2009, near Lake Tahoe. According to a 2014 U.S. Fish and Wildlife Service publication, "wolverines are found in the North Cascades in Washington and the Northern Rocky Mountains in Idaho, Montana, Oregon (Wallowa Range), and Wyoming. Individual wolverines have also moved into historic range in the Sierra Nevada Mountains of California and the Southern Rocky Mountains of Colorado, but have not established breeding populations in these areas". In 2022, Colorado Parks and Wildlife considered plans to reintroduce the wolverine to the state. Wolverines are also found in Utah but are very rarely seen, with only six confirmed sightings since the first confirmed sighting in 1979. Three of these six confirmed Utah sightings have been caught on video. A male wolverine was finally captured and tagged in Utah in 2022, then released back into the wild, to better understand the animal's range. In August 2020, the National Park Service reported that wolverines had been sighted at Mount Rainier, Washington, for the first time in more than a century. The sighting was of a reproductive female and her two offspring. In 2004, the first confirmed sighting of a wolverine in Michigan since the early 19th century took place, when a Michigan Department of Natural Resources wildlife biologist photographed a wolverine in Ubly, Michigan. The specimen was found dead at the Minden City State Game Area in Sanilac County, Michigan, in 2010. Behavior and ecology Diet and hunting Wolverines are primarily scavengers. Most of their food is carrion, especially in winter and early spring. They may find carrion themselves, feed on it after the predator (often, a wolf pack) has finished, or simply take it from another predator. Wolverines are known to follow wolf and lynx trails to scavenge the remains of their kills. Whether eating live prey or carrion, the wolverine's feeding style appears voracious, leading to the nickname of "glutton" (also the basis of the scientific name). However, this feeding style is believed to be an adaptation to food scarcity, especially in winter. The wolverine is also a powerful and versatile predator. Its prey mainly consists of small to medium-sized mammals, but wolverines have been recorded killing prey many times larger than themselves, such as adult deer. Prey species include porcupines, squirrels, chipmunks, beavers, marmots, moles, gophers, rabbits, voles, mice, rats, shrews, lemmings, caribou, roe deer, white-tailed deer, mule deer, sheep, goats, cattle, bison, moose, and elk. Smaller predators are occasionally preyed on, including martens, mink, foxes, Eurasian lynx, weasels, coyotes, and wolf pups.
Wolverines have also been known to kill Canada lynx in the Yukon of Canada. Wolverines often pursue live prey that are relatively easy to obtain, including animals caught in traps, newborn mammals, and deer (including adult moose and elk) when they are weakened by winter or immobilized by heavy snow. Their diets are sometimes supplemented by birds' eggs, birds (especially geese), roots, seeds, insect larvae, and berries. Adult wolverines appear to be one of the few mammalian carnivores to actively pose a threat to golden eagles: wolverines were observed to prey on nestling golden eagles in Denali National Park, and in northern Sweden an incubating adult golden eagle was killed on its nest by a wolverine. Wolverines inhabiting the Old World (specifically, Fennoscandia) hunt more actively than their North American relatives. This may be because competing predator populations in Eurasia are less dense, making it more practical for the wolverine to hunt for itself than to wait for another animal to make a kill and then try to snatch it. They often feed on carrion left by wolves, so changes in wolf populations may affect the population of wolverines. They are also known on occasion to eat plant material. Wolverines often cache their food during times of plenty. This is of particular importance to lactating females in the winter and early spring, a time when food is scarce. Reproduction Wolverines are induced ovulators. Successful males will form lifetime relationships with two or three females, which they will visit occasionally, while other males are left without a mate. Mating season is in the summer, but the actual implantation of the embryo (blastocyst) in the uterus is postponed until early winter, delaying the development of the fetus. Females will often not produce young if food is scarce. The gestation period is 30–50 days, and litters of typically two or three young ("kits") are born in the spring. Kits develop rapidly, reaching adult size within the first year. The typical longevity of a wolverine in captivity is around 15 to 17 years, but in the wild the average lifespan is more likely between 8 and 10 years. Fathers make visits to their offspring until they are weaned at 10 weeks of age; also, once the young are about six months old, some reconnect with their fathers and travel together for a time. Interspecies interactions Wolves, American black bears, brown bears, cougars, and golden eagles are capable of killing wolverines, particularly young and inexperienced individuals. Wolves are thought to be the wolverine's most important natural predator, with the arrival of wolves in a wolverine's territory presumably leading the latter to abandon the area. Armed with powerful jaws, sharp claws, and a thick hide, wolverines, like most mustelids, are remarkably strong for their size. They may defend against larger or more numerous predators such as wolves or bears. By far their most serious predator is the grey wolf, with an extensive record of wolverine fatalities attributed to wolves in both North America and Eurasia. In North America, another, less frequent, predator is the cougar. At least one account reported a wolverine's apparent attempt to steal a kill from a black bear, although the bear won what was ultimately a fatal contest for the wolverine.
There are also a few accounts of brown bears killing and consuming wolverines. Although brown bears have at times been reported to be chased off prey by wolverines, in some areas such as Denali National Park wolverines seem to actively avoid encounters with grizzly bears, much as they have been reported to avoid areas where wolves begin hunting them. Urine scent marking Wolverines have been observed to use urine as a scent-marking behavior. Headspace analysis of the volatiles emanating from urine samples identified 19 potential semiochemicals. The major classes of identified chemicals are ketones (2-heptanone, 4-heptanone and 4-nonanone) and monoterpenes (alpha-pinene, beta-pinene, limonene, linalool and geraniol). In other mammals, the excretion of these terpenes is unusual. The conifer needles found in wolverine scat are the likely source of these monoterpenes. Threats and conservation The world's total wolverine population is not known. The animal exhibits a low population density and requires a very large home range. The wolverine is listed by the IUCN as Least Concern because of its "wide distribution, remaining large populations, and the unlikelihood that it is in decline at a rate fast enough to trigger even Near Threatened". The range of a male wolverine can be more than , encompassing the ranges of several females, which have smaller home ranges of roughly 130–260 km2 (50–100 mi2). Adult wolverines try for the most part to keep nonoverlapping ranges with adults of the same sex. Radio tracking suggests an animal can range hundreds of miles in a few months. Female wolverines burrow into snow in February to create a den, which is used until weaning in mid-May. Areas inhabited nonseasonally by wolverines are thus restricted to zones with late-spring snowmelts. This fact has led to concern that global warming will shrink the ranges of wolverine populations. This requirement for large territories brings wolverines into conflict with human development, and hunting and trapping further reduce their numbers, causing them to disappear from large parts of their former range; attempts to have them declared an endangered species have met with little success. In February 2013, the United States Fish and Wildlife Service proposed giving Endangered Species Act protections to the wolverine because its winter habitat in the northern Rockies is diminishing. This was a result of a lawsuit brought by the Center for Biological Diversity and Defenders of Wildlife. In November 2023, the United States Fish and Wildlife Service announced that it was adding the wolverine in the United States Lower 48 states to the threatened list. The Wildlife Conservation Society reported in June 2009 that a wolverine that researchers had been tracking for almost three months had crossed into northern Colorado. Society officials had tagged the young male wolverine in Wyoming near Grand Teton National Park, and it had traveled southward for about . It was the first wolverine seen in Colorado since 1919, and its appearance was also confirmed by the Colorado Division of Wildlife. In May 2016, the same wolverine was killed by a cattle ranch-hand in North Dakota, ending a greater-than- trip by this lone male wolverine, dubbed M-56. This was the first verified sighting of a wolverine in North Dakota in 150 years. In February 2014, a wolverine was seen in Utah, the first confirmed sighting in that state in 30 years.
In captivity Around a hundred wolverines are held in zoos across North America and Europe, and they have been bred in captivity, but only with difficulty and high infant mortality. Human interactions Many North American cities, sports teams, and organizations use the wolverine as a mascot. For example, the US state of Michigan is, by tradition, known as "the Wolverine State", and the University of Michigan takes the animal as its mascot. There have also been professional baseball and football clubs called the "Wolverines". The association is well and long established: for example, many Detroiters volunteered to fight during the American Civil War and George Armstrong Custer, who led the Michigan Brigade, called them the "Wolverines". The origins of this association are obscure; it may derive from a busy trade in wolverine furs in Sault Ste. Marie in the 18th century or may recall a disparagement intended to compare early settlers in Michigan with the vicious mammal. Wolverines are, however, extremely rare in Michigan. A sighting in February 2004 near Ubly was the first confirmed sighting in Michigan in 200 years. The animal was found dead in 2010. The Marvel Comics superhero James "Logan" Howlett was given the nickname "Wolverine" while cage fighting because of his skill, short stature, keen animal senses, ferocity, and most notably, claws that retract from both sets of knuckles. The wolverine is prevalent in stories and oral history from various Algonquian tribes and figures prominently in the mythology of the Innu people of eastern Quebec and Labrador. The wolverine is known as Kuekuatsheu, a conniving trickster who created the world. The story of the formation of the Innu world begins long ago when Kuekuatsheu built a big boat similar to Noah's Ark and put all the various animal species in it. There was a great deal of rain, and the land was flooded. Kuekuatsheu told a mink to dive into the water to retrieve some mud and rocks which he mixed together to create an island, which is the world that is presently inhabited along with all the animals. Many tales of Kuekuatsheu are often humorous and irreverent and include crude references to bodily functions. Some Northeastern tribes, such as the Miꞌkmaq and Passamaquoddy, refer to the wolverine as Lox, who usually appears in tales as a trickster and thief (although generally more dangerous than its Innu counterpart) and is often depicted as a companion to the wolf. Similarly, the Dené, a group of the Athabaskan-speaking natives of northwestern Canada, have many stories of the wolverine as a trickster and cultural transformer much like the coyote in the Navajo tradition or raven in Northwest Coast traditions.
XML
Extensible Markup Language (XML) is a markup language and file format for storing, transmitting, and reconstructing data. It defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. The World Wide Web Consortium's XML 1.0 Specification of 1998 and several other related specifications, all of them free open standards, define XML. The design goals of XML emphasize simplicity, generality, and usability across the Internet. It is a textual data format with strong support via Unicode for different human languages. Although the design of XML focuses on documents, the language is widely used for the representation of arbitrary data structures, such as those used in web services. Several schema systems exist to aid in the definition of XML-based languages, while programmers have developed many application programming interfaces (APIs) to aid the processing of XML data. Overview The main purpose of XML is serialization, i.e. storing, transmitting, and reconstructing arbitrary data. For two disparate systems to exchange information, they need to agree upon a file format. XML standardizes this process. It is therefore analogous to a lingua franca for representing information. As a markup language, XML labels, categorizes, and structurally organizes information. XML tags represent the data structure and contain metadata. What is within the tags is data, encoded in the way the XML standard specifies. An additional XML schema (XSD) defines the necessary metadata for interpreting and validating XML. (This is also referred to as the canonical schema.) An XML document that adheres to basic XML rules is "well-formed"; one that adheres to its schema is "valid". IETF RFC 7303 (which supersedes the older RFC 3023) provides rules for the construction of media types for use in XML messages. It defines three media types: application/xml (text/xml is an alias), application/xml-external-parsed-entity (text/xml-external-parsed-entity is an alias) and application/xml-dtd. They are used for transmitting raw XML files without exposing their internal semantics. RFC 7303 further recommends that XML-based languages be given media types ending in +xml, for example, image/svg+xml for SVG. Further guidelines for the use of XML in a networked context appear in RFC 3470, also known as IETF BCP 70, a document covering many aspects of designing and deploying an XML-based language. Applications XML has come into common use for the interchange of data over the Internet. Hundreds of document formats using XML syntax have been developed, including RSS, Atom, Office Open XML, OpenDocument, SVG, COLLADA, and XHTML. XML also provides the base language for communication protocols such as SOAP and XMPP. It is one of the message exchange formats used in the Asynchronous JavaScript and XML (AJAX) programming technique. Many industry data standards, such as Health Level 7, OpenTravel Alliance, FpML, MISMO, and National Information Exchange Model, are based on XML and the rich features of the XML schema specification. In publishing, Darwin Information Typing Architecture is an XML industry data standard. XML is used extensively to underpin various publishing formats. One of the applications of XML in science is the representation of operational meteorology information based on IWXXM standards. Key terminology The material in this section is based on the XML Specification.
This is not an exhaustive list of all the constructs that appear in XML; it provides an introduction to the key constructs most often encountered in day-to-day use. An XML document is a string of characters. Every legal Unicode character except Null may appear in an XML 1.1 document (though some are discouraged). The processor analyzes the markup and passes structured information to an application. The specification places requirements on what an XML processor must do and not do, but the application is outside its scope. The processor (as the specification calls it) is often referred to colloquially as an XML parser. The characters making up an XML document are divided into markup and content, which may be distinguished by the application of simple syntactic rules. Generally, strings that constitute markup either begin with the character < and end with a >, or they begin with the character & and end with a ;. Strings of characters that are not markup are content. However, in a CDATA section, the delimiters <![CDATA[ and ]]> are classified as markup, while the text between them is classified as content. In addition, whitespace before and after the outermost element is classified as markup. A tag is a markup construct that begins with < and ends with >. There are three types of tag: a start-tag, such as <section>; an end-tag, such as </section>; and an empty-element tag, such as <line-break />. An element is a logical document component that either begins with a start-tag and ends with a matching end-tag or consists only of an empty-element tag. The characters between the start-tag and end-tag, if any, are the element's content, and may contain markup, including other elements, which are called child elements. An example is <greeting>Hello, world!</greeting>. Another is <line-break />. An attribute is a markup construct consisting of a name–value pair that exists within a start-tag or empty-element tag. An example is <img src="madonna.jpg" alt="Madonna" />, where the names of the attributes are "src" and "alt", and their values are "madonna.jpg" and "Madonna" respectively. Another example is <step number="3">Connect A to B.</step>, where the name of the attribute is "number" and its value is "3". An XML attribute can only have a single value, and each attribute can appear at most once on each element. In the common situation where a list of multiple values is desired, this must be done by encoding the list into a well-formed XML attribute with some format beyond what XML itself defines. Usually this is either a comma- or semicolon-delimited list or, if the individual values are known not to contain spaces, a space-delimited list. An example is <div class="inner greeting-box">Welcome!</div>, where the attribute "class" has the single value "inner greeting-box", which in turn denotes the two CSS class names "inner" and "greeting-box". XML documents may begin with an XML declaration that describes some information about themselves. An example is <?xml version="1.0" encoding="UTF-8"?>. Characters and escaping XML documents consist entirely of characters from the Unicode repertoire. Except for a small number of specifically excluded control characters, any character defined by Unicode may appear within the content of an XML document. XML includes facilities for identifying the encoding of the Unicode characters that make up the document, and for expressing characters that, for one reason or another, cannot be used directly.
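Before turning to the character rules, the terminology above can be grounded in a small, hedged sketch using Python's standard xml.etree.ElementTree module; the document, its tag names, and its attribute names are invented for illustration and belong to no real vocabulary.

import xml.etree.ElementTree as ET

doc = b"""<?xml version="1.0" encoding="UTF-8"?>
<recipe name="pancakes">
  <step number="1">Mix flour &amp; milk.</step>
  <step number="2"/>
</recipe>"""

root = ET.fromstring(doc)        # the single root element
print(root.tag, root.attrib)     # recipe {'name': 'pancakes'}
for step in root:                # child elements, in document order
    # attribute value, then content (None for the empty-element tag)
    print(step.get("number"), repr(step.text))

Note that the parser has already resolved &amp; to a plain ampersand in the first step's content, previewing the escaping rules described next.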
Valid characters Unicode code points in the following ranges are valid in XML 1.0 documents: U+0009 (Horizontal Tab), U+000A (Line Feed), U+000D (Carriage Return): these are the only C0 controls accepted in XML 1.0; U+0020–U+D7FF, U+E000–U+FFFD: this excludes some noncharacters in the BMP (all surrogates, U+FFFE and U+FFFF are forbidden); U+10000–U+10FFFF: this includes all code points in supplementary planes, including noncharacters. XML 1.1 extends the set of allowed characters to include all the above, plus the remaining characters in the range U+0001–U+001F. At the same time, however, it restricts the use of C0 and C1 control characters other than U+0009 (Horizontal Tab), U+000A (Line Feed), U+000D (Carriage Return), and U+0085 (Next Line) by requiring them to be written in escaped form (for example U+0001 must be written as &#x01; or its equivalent). In the case of C1 characters, this restriction is a backwards incompatibility; it was introduced to allow common encoding errors to be detected. The code point U+0000 (Null) is the only character that is not permitted in any XML 1.1 document. Encoding detection The Unicode character set can be encoded into bytes for storage or transmission in a variety of different ways, called "encodings". Unicode itself defines encodings that cover the entire repertoire; well-known ones include UTF-8 (which the XML standard recommends using, without a BOM) and UTF-16. There are many other text encodings that predate Unicode, such as ASCII and the various ISO/IEC 8859 encodings; their character repertoires are in every case subsets of the Unicode character set. XML allows the use of any of the Unicode-defined encodings and any other encodings whose characters also appear in Unicode. XML also provides a mechanism whereby an XML processor can reliably, without any prior knowledge, determine which encoding is being used. Encodings other than UTF-8 and UTF-16 are not necessarily recognized by every XML parser (and in some cases not even UTF-16, even though the standard mandates that it also be recognized). Escaping XML provides escape facilities for including characters that are problematic to include directly. For example: The characters "<" and "&" are key syntax markers and may never appear in content outside a CDATA section. (It is allowed, but not recommended, to use "<" in XML entity values.) Some character encodings support only a subset of Unicode. For example, it is legal to encode an XML document in ASCII, but ASCII lacks code points for Unicode characters such as "é". It might not be possible to type the character on the author's machine. Some characters have glyphs that cannot be visually distinguished from other characters, such as the nonbreaking space (&#xa0;) " " and the space (&#x20;) " ", or the Cyrillic capital letter A (&#x410;) "А" and the Latin capital letter A (&#x41;) "A". There are five predefined entities: &lt; represents "<"; &gt; represents ">"; &amp; represents "&"; &apos; represents "'"; &quot; represents '"'. All permitted Unicode characters may be represented with a numeric character reference. Consider the Chinese character "中", whose numeric code in Unicode is hexadecimal 4E2D, or decimal 20,013. A user whose keyboard offers no method for entering this character could still insert it in an XML document encoded either as &#20013; or &#x4e2d;. Similarly, the string "I <3 Jörg" could be encoded for inclusion in an XML document as I &lt;3 J&#xF6;rg.
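As a brief, hedged sketch of these escape facilities, again in Python with only the standard library (the strings are invented examples):

import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape

# The predefined entities cover the key syntax markers "&" and "<".
print(escape('I <3 "Jörg" & you'))   # I &lt;3 "Jörg" &amp; you

# A numeric character reference built from a code point...
ref = "&#x{:X};".format(ord("中"))   # &#x4E2D;

# ...is resolved back to the character by any conforming parser.
assert ET.fromstring("<c>" + ref + "</c>").text == "中"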
&#0; is not permitted because the null character is one of the control characters excluded from XML, even when using a numeric character reference. An alternative encoding mechanism such as Base64 is needed to represent such characters. Comments Comments may appear anywhere in a document outside other markup. Comments cannot appear before the XML declaration. Comments begin with <!-- and end with -->. For compatibility with SGML, the string "--" (double-hyphen) is not allowed inside comments; this means comments cannot be nested. The ampersand has no special significance within comments, so entity and character references are not recognized as such, and there is no way to represent characters outside the character set of the document encoding. An example of a valid comment: <!--no need to escape <code> & such in comments--> International use XML 1.0 (Fifth Edition) and XML 1.1 support the direct use of almost any Unicode character in element names, attributes, comments, character data, and processing instructions (other than the ones that have special symbolic meaning in XML itself, such as the less-than sign, "<"). The following is a well-formed XML document including Chinese, Armenian and Cyrillic characters: <?xml version="1.0" encoding="UTF-8"?> <俄语 լեզու="ռուսերեն">данные</俄语> Syntactical correctness and error-handling The XML specification defines an XML document as a well-formed text, meaning that it satisfies a list of syntax rules provided in the specification. Some key points include: The document contains only properly encoded legal Unicode characters. None of the special syntax characters such as < and & appear except when performing their markup-delineation roles. The start-tag, end-tag, and empty-element tag that delimit elements are correctly nested, with none missing and none overlapping. Tag names are case-sensitive; the start-tag and end-tag must match exactly. Tag names cannot contain any of the characters !"#$%&'()*+,/;<=>?@[\]^`{|}~, nor a space character, and cannot begin with "-", ".", or a numeric digit. A single root element contains all the other elements. The definition of an XML document excludes texts that contain violations of well-formedness rules; they are simply not XML. An XML processor that encounters such a violation is required to report such errors and to cease normal processing. This policy, occasionally referred to as "draconian error handling", stands in notable contrast to the behavior of programs that process HTML, which are designed to produce a reasonable result even in the presence of severe markup errors. XML's policy in this area has been criticized as a violation of Postel's law ("Be conservative in what you send; be liberal in what you accept"). The XML specification defines a valid XML document as a well-formed XML document which also conforms to the rules of a Document Type Definition (DTD). Schemas and validation In addition to being well formed, an XML document may be valid. This means that it contains a reference to a Document Type Definition (DTD), and that its elements and attributes are declared in that DTD and follow the grammatical rules for them that the DTD specifies. XML processors are classified as validating or non-validating depending on whether or not they check XML documents for validity. A processor that discovers a validity error must be able to report it, but may continue normal processing. A DTD is an example of a schema or grammar. 
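To illustrate the well-formed/valid distinction just drawn, here is a hedged sketch using the third-party lxml library (not part of the Python standard library); the two-element DTD and the documents below are invented for illustration.

import io
from lxml import etree

dtd = etree.DTD(io.StringIO(
    "<!ELEMENT note (to, body)>"
    "<!ELEMENT to (#PCDATA)>"
    "<!ELEMENT body (#PCDATA)>"))

good = etree.fromstring("<note><to>Ada</to><body>Hi</body></note>")
bad = etree.fromstring("<note><body>Hi</body></note>")  # well-formed, but 'to' is missing

print(dtd.validate(good))  # True: well-formed and valid against the DTD
print(dtd.validate(bad))   # False: well-formed but invalid

A document that was not even well-formed would never reach validation: etree.fromstring would reject it outright, consistent with the draconian error handling described above.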
Since the initial publication of XML 1.0, there has been substantial work in the area of schema languages for XML. Such schema languages typically constrain the set of elements that may be used in a document, which attributes may be applied to them, the order in which they may appear, and the allowable parent/child relationships. Document type definition The oldest schema language for XML is the document type definition (DTD), inherited from SGML. DTDs have the following benefits: DTD support is ubiquitous due to its inclusion in the XML 1.0 standard. DTDs are terse compared to element-based schema languages and consequently present more information in a single screen. DTDs allow the declaration of standard public entity sets for publishing characters. DTDs define a document type rather than the types used by a namespace, thus grouping all constraints for a document in a single collection. DTDs have the following limitations: They have no explicit support for newer features of XML, most importantly namespaces. They lack expressiveness. XML DTDs are simpler than SGML DTDs, and there are certain structures that cannot be expressed with regular grammars. DTDs only support rudimentary datatypes. They lack readability. DTD designers typically make heavy use of parameter entities (which behave essentially as textual macros), which make it easier to define complex grammars but at the expense of clarity. They use a syntax based on regular expression syntax, inherited from SGML, to describe the schema. Typical XML APIs such as SAX do not attempt to offer applications a structured representation of the syntax, so it is less accessible to programmers than an element-based syntax may be. Two peculiar features that distinguish DTDs from other schema types are the syntactic support for embedding a DTD within XML documents and for defining entities, which are arbitrary fragments of text or markup that the XML processor inserts in the DTD itself and in the XML document wherever they are referenced, like character escapes. DTD technology is still used in many applications because of its ubiquity. Schema A newer schema language, described by the W3C as the successor of DTDs, is XML Schema, often referred to by the initialism for XML Schema instances, XSD (XML Schema Definition). XSDs are far more powerful than DTDs in describing XML languages. They use a rich datatyping system and allow for more detailed constraints on an XML document's logical structure. XSDs also use an XML-based format, which makes it possible to use ordinary XML tools to help process them. An example of an xs:schema element that defines an (empty) schema: <?xml version="1.0" encoding="UTF-8" ?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"></xs:schema> RELAX NG RELAX NG (Regular Language for XML Next Generation) was initially specified by OASIS and is now a standard (Part 2: Regular-grammar-based validation of ISO/IEC 19757 – DSDL). RELAX NG schemas may be written in either an XML-based syntax or a more compact non-XML syntax; the two syntaxes are isomorphic, and James Clark's conversion tool, Trang, can convert between them without loss of information. RELAX NG has a simpler definition and validation framework than XML Schema, making it easier to use and implement. It also has the ability to use datatype framework plug-ins; a RELAX NG schema author, for example, can require values in an XML document to conform to definitions in XML Schema Datatypes.
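Continuing the same hedged Python/lxml sketch for RELAX NG (the one-element schema below, which merely declares an entry element containing text, is invented for illustration):

from lxml import etree

rng = etree.RelaxNG(etree.fromstring(
    '<element name="entry" xmlns="http://relaxng.org/ns/structure/1.0">'
    "  <text/>"
    "</element>"))

print(rng.validate(etree.fromstring("<entry>hello</entry>")))  # True
print(rng.validate(etree.fromstring("<note>hello</note>")))    # False: wrong element name

The schema here uses the XML-based syntax; the equivalent compact syntax would be a one-liner, and a tool such as Trang can convert between the two forms.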
Schematron Schematron is a language for making assertions about the presence or absence of patterns in an XML document. It typically uses XPath expressions. Schematron is now a standard (Part 3: Rule-based validation of ISO/IEC 19757 – DSDL). DSDL and other schema languages DSDL (Document Schema Definition Languages) is a multi-part ISO/IEC standard (ISO/IEC 19757) that brings together a comprehensive set of small schema languages, each targeted at specific problems. DSDL includes RELAX NG full and compact syntax, Schematron assertion language, and languages for defining datatypes, character repertoire constraints, renaming and entity expansion, and namespace-based routing of document fragments to different validators. DSDL schema languages do not have the vendor support of XML Schemas yet, and are to some extent a grassroots reaction of industrial publishers to the lack of utility of XML Schemas for publishing. Some schema languages not only describe the structure of a particular XML format but also offer limited facilities to influence processing of individual XML files that conform to this format. DTDs and XSDs both have this ability; they can for instance provide the infoset augmentation facility and attribute defaults. RELAX NG and Schematron intentionally do not provide these. Related specifications A cluster of specifications closely related to XML have been developed, starting soon after the initial publication of XML 1.0. It is frequently the case that the term "XML" is used to refer to XML together with one or more of these other technologies that have come to be seen as part of the XML core. XML namespaces enable the same document to contain XML elements and attributes taken from different vocabularies, without any naming collisions occurring. Although XML Namespaces are not part of the XML specification itself, virtually all XML software also supports XML Namespaces. XML Base defines the xml:base attribute, which may be used to set the base for resolution of relative URI references within the scope of a single XML element. XML Information Set or XML Infoset is an abstract data model for XML documents in terms of information items. The infoset is commonly used in the specifications of XML languages, for convenience in describing constraints on the XML constructs those languages allow. XSL (Extensible Stylesheet Language) is a family of languages used to transform and render XML documents, split into three parts: XSLT (XSL Transformations), an XML language for transforming XML documents into other XML documents or other formats such as HTML, plain text, or XSL-FO. XSLT is very tightly coupled with XPath, which it uses to address components of the input XML document, mainly elements and attributes. XSL-FO (XSL Formatting Objects), an XML language for rendering XML documents, often used to generate PDFs. XPath (XML Path Language), a non-XML language for addressing the components (elements, attributes, and so on) of an XML document. XPath is widely used in other core-XML specifications and in programming libraries for accessing XML-encoded data. XQuery (XML Query) is an XML query language strongly rooted in XPath and XML Schema. It provides methods to access, manipulate and return XML, and is mainly conceived as a query language for XML databases. XML Signature defines syntax and processing rules for creating digital signatures on XML content. XML Encryption defines syntax and processing rules for encrypting XML content. 
XML model (Part 11: Schema Association of ISO/IEC 19757 – DSDL) defines a means of associating any xml document with any of the schema types mentioned above. Some other specifications conceived as part of the "XML Core" have failed to find wide adoption, including XInclude, XLink, and XPointer. Programming interfaces The design goals of XML include, "It shall be easy to write programs which process XML documents." Despite this, the XML specification contains almost no information about how programmers might go about doing such processing. The XML Infoset specification provides a vocabulary to refer to the constructs within an XML document, but does not provide any guidance on how to access this information. A variety of APIs for accessing XML have been developed and used, and some have been standardized. Existing APIs for XML processing tend to fall into these categories: Stream-oriented APIs accessible from a programming language, for example SAX and StAX. Tree-traversal APIs accessible from a programming language, for example DOM. XML data binding, which provides an automated translation between an XML document and programming-language objects. Declarative transformation languages such as XSLT and XQuery. Syntax extensions to general-purpose programming languages, for example LINQ and Scala. Stream-oriented facilities require less memory and, for certain tasks based on a linear traversal of an XML document, are faster and simpler than other alternatives. Tree-traversal and data-binding APIs typically require the use of much more memory, but are often found more convenient for use by programmers; some include declarative retrieval of document components via the use of XPath expressions. XSLT is designed for declarative description of XML document transformations, and has been widely implemented both in server-side packages and Web browsers. XQuery overlaps XSLT in its functionality, but is designed more for searching of large XML databases. Simple API for XML Simple API for XML (SAX) is a lexical, event-driven API in which a document is read serially and its contents are reported as callbacks to various methods on a handler object of the user's design. SAX is fast and efficient to implement, but difficult to use for extracting information at random from the XML, since it tends to burden the application author with keeping track of what part of the document is being processed. It is better suited to situations in which certain types of information are always handled the same way, no matter where they occur in the document. Pull parsing Pull parsing treats the document as a series of items read in sequence using the iterator design pattern. This allows for writing of recursive descent parsers in which the structure of the code performing the parsing mirrors the structure of the XML being parsed, and intermediate parsed results can be used and accessed as local variables within the functions performing the parsing, or passed down (as function parameters) into lower-level functions, or returned (as function return values) to higher-level functions. Examples of pull parsers include Data::Edit::Xml in Perl, StAX in the Java programming language, XMLPullParser in Smalltalk, XMLReader in PHP, ElementTree.iterparse in Python, SmartXML in Red, System.Xml.XmlReader in the .NET Framework, and the DOM traversal API (NodeIterator and TreeWalker). A pull parser creates an iterator that sequentially visits the various elements, attributes, and data in an XML document. 
Code that uses this iterator can test the current item (to tell, for example, whether it is a start-tag or end-tag, or text), and inspect its attributes (local name, namespace, values of XML attributes, value of text, etc.), and can also move the iterator to the next item. The code can thus extract information from the document as it traverses it. The recursive-descent approach tends to lend itself to keeping data as typed local variables in the code doing the parsing, while SAX, for instance, typically requires a parser to manually maintain intermediate data within a stack of elements that are parent elements of the element being parsed. Pull-parsing code can be more straightforward to understand and maintain than SAX parsing code. Document Object Model The Document Object Model (DOM) is an interface that allows for navigation of the entire document as if it were a tree of node objects representing the document's contents. A DOM document can be created by a parser, or can be generated manually by users (with limitations). Data types in DOM nodes are abstract; implementations provide their own programming language-specific bindings. DOM implementations tend to be memory intensive, as they generally require the entire document to be loaded into memory and constructed as a tree of objects before access is allowed. Data binding XML data binding is a technique for simplifying development of applications that need to work with XML documents. It involves mapping the XML document to a hierarchy of strongly typed objects, rather than using the generic objects created by a DOM parser. The resulting code is often easier to read and maintain, and it can help to identify problems at compile time rather than run-time. XML data binding is particularly well-suited for applications where the document structure is known and fixed at the time the application is written. By creating a strongly typed representation of the XML data, developers can take advantage of modern integrated development environments (IDEs) that provide features like auto-complete, code refactoring, and code highlighting. This can make it easier to write correct and efficient code, and reduce the risk of errors and bugs. Example data-binding systems include the Java Architecture for XML Binding (JAXB), XML Serialization in .NET Framework, and XML serialization in gSOAP. XML as data type XML has appeared as a first-class data type in other languages. The ECMAScript for XML (E4X) extension to the ECMAScript/JavaScript language explicitly defines two specific objects (XML and XMLList) for JavaScript, which support XML document nodes and XML node lists as distinct objects and use a dot-notation specifying parent-child relationships. E4X is supported by the Mozilla 2.5+ browsers (though now deprecated) and Adobe Actionscript but has not been widely adopted. Similar notations are used in Microsoft's LINQ implementation for Microsoft .NET 3.5 and above, and in Scala (which uses the Java VM). The open-source xmlsh application, which provides a Linux-like shell with special features for XML manipulation, similarly treats XML as a data type, using the <[ ]> notation. The Resource Description Framework defines a data type rdf:XMLLiteral to hold wrapped, canonical XML. Facebook has produced extensions to the PHP and JavaScript languages that add XML to the core syntax in a similar fashion to E4X, namely XHP and JSX respectively. History XML is an application profile of SGML (ISO 8879). 
The versatility of SGML for dynamic information display was understood by early digital media publishers in the late 1980s prior to the rise of the Internet. By the mid-1990s some practitioners of SGML had gained experience with the then-new World Wide Web, and believed that SGML offered solutions to some of the problems the Web was likely to face as it grew. Dan Connolly added SGML to the list of W3C's activities when he joined the staff in 1995; work began in mid-1996 when Sun Microsystems engineer Jon Bosak developed a charter and recruited collaborators. Bosak was well-connected in the small community of people who had experience both in SGML and the Web. XML was compiled by a working group of eleven members, supported by a (roughly) 150-member Interest Group. Technical debate took place on the Interest Group mailing list and issues were resolved by consensus or, when that failed, majority vote of the Working Group. A record of design decisions and their rationales was compiled by Michael Sperberg-McQueen on December 4, 1997. James Clark served as Technical Lead of the Working Group, notably contributing the empty-element <empty /> syntax and the name "XML". Other names that had been put forward for consideration included "MAGMA" (Minimal Architecture for Generalized Markup Applications), "SLIM" (Structured Language for Internet Markup) and "MGML" (Minimal Generalized Markup Language). The co-editors of the specification were originally Tim Bray and Michael Sperberg-McQueen. Halfway through the project, Bray accepted a consulting engagement with Netscape, provoking vociferous protests from Microsoft. Bray was temporarily asked to resign the editorship. This led to intense dispute in the Working Group, eventually solved by the appointment of Microsoft's Jean Paoli as a third co-editor. The XML Working Group communicated primarily through email and weekly teleconferences. The major design decisions were reached in a short burst of intense work between August and November 1996, when the first Working Draft of an XML specification was published. Further design work continued through 1997, and XML 1.0 became a W3C Recommendation on February 10, 1998. Sources XML is a profile of an ISO standard, SGML, and most of XML comes from SGML unchanged. From SGML comes the separation of logical and physical structures (elements and entities), the availability of grammar-based validation (DTDs), the separation of data and metadata (elements and attributes), mixed content, the separation of processing from representation (processing instructions), and the default angle-bracket syntax. The SGML declaration was removed; thus, XML has a fixed delimiter set and adopts Unicode as the document character set. Other sources of technology for XML were the TEI (Text Encoding Initiative), which defined a profile of SGML for use as a "transfer syntax" and HTML. The ERCS (Extended Reference Concrete Syntax) project of the SPREAD (Standardization Project Regarding East Asian Documents) project of the ISO-related China/Japan/Korea Document Processing expert group was the basis of XML 1.0's naming rules; SPREAD also introduced hexadecimal numeric character references and the concept of references to make available all Unicode characters. To support ERCS, XML and HTML better, the SGML standard IS 8879 was revised in 1996 and 1998 with WebSGML Adaptations. 
Ideas that developed during discussion and that are novel in XML included the algorithm for encoding detection and the encoding header, the processing instruction target, the xml:space attribute, and the new close delimiter for empty-element tags. The notion of well-formedness as opposed to validity (which enables parsing without a schema) was first formalized in XML, although it had been implemented successfully in the Electronic Book Technology "Dynatext" software; the software from the University of Waterloo New Oxford English Dictionary Project; the RISP LISP SGML text processor at Uniscope, Tokyo; the US Army Missile Command IADS hypertext system; Mentor Graphics Context; Interleaf; and the Xerox Publishing System. Versions 1.0 and 1.1 The first version (XML 1.0) was initially defined in 1998. It has undergone minor revisions since then, without being given a new version number, and is currently in its fifth edition, as published on November 26, 2008. It is widely implemented and still recommended for general use. The second version (XML 1.1) was initially published on February 4, 2004, the same day as XML 1.0 Third Edition, and is currently in its second edition, as published on August 16, 2006. It contains features (some contentious) that are intended to make XML easier to use in certain cases. The main changes are to enable the use of line-ending characters used on EBCDIC platforms, and the use of scripts and characters absent from Unicode 3.2. XML 1.1 is not very widely implemented and is recommended for use only by those who need its particular features. Prior to its fifth edition release, XML 1.0 differed from XML 1.1 in having stricter requirements for characters available for use in element and attribute names and unique identifiers: in the first four editions of XML 1.0 the characters were exclusively enumerated using a specific version of the Unicode standard (Unicode 2.0 to Unicode 3.2). The fifth edition substitutes the mechanism of XML 1.1, which is more future-proof but reduces redundancy. The approach taken in the fifth edition of XML 1.0 and in all editions of XML 1.1 is that only certain characters are forbidden in names, and everything else is allowed to accommodate suitable name characters in future Unicode versions. In the fifth edition, XML names may contain characters in the Balinese, Cham, or Phoenician scripts, among many others added to Unicode since Unicode 3.2. Almost any Unicode code point can be used in the character data and attribute values of an XML 1.0/1.1 document, even if the character corresponding to the code point is not defined in the current version of Unicode. In character data and attribute values, XML 1.1 allows the use of more control characters than XML 1.0, but, for "robustness", most of the control characters introduced in XML 1.1 must be expressed as numeric character references (and #x7F through #x9F, which had been allowed in XML 1.0, are in XML 1.1 even required to be expressed as numeric character references). Among the supported control characters in XML 1.1 are two line break codes that must be treated as whitespace characters, which are the only control codes that can be written directly. 2.0 There has been discussion of an XML 2.0, although no organization has announced plans for work on such a project.
XML-SW (SW for skunkworks), written by one of the original developers of XML, contains some proposals for what an XML 2.0 might look like, including elimination of DTDs from syntax, as well as integration of XML namespaces, XML Base and XML Information Set into the base standard. MicroXML In 2012, James Clark (technical lead of the XML Working Group) and John Cowan (editor of the XML 1.1 specification) formed the MicroXML Community Group within the W3C and published a specification for a significantly reduced subset of XML. Binary XML The World Wide Web Consortium also has an XML Binary Characterization Working Group doing preliminary research into use cases and properties for a binary encoding of the XML Information Set. The working group is not chartered to produce any official standards. Since XML is by definition text-based, ITU-T and ISO are using the name Fast Infoset for their own binary format (ITU-T Rec. X.891 and ISO/IEC 24824-1) to avoid confusion. Criticism XML and its extensions have regularly been criticized for verbosity, complexity and redundancy. Mapping the basic tree model of XML to the type systems of programming languages or databases can be difficult, especially when XML is used for exchanging highly structured data between applications, which was not its primary design goal. However, XML data binding systems allow applications to access XML data directly from objects representing the data structure in the programming language used, which ensures type safety, rather than using the DOM or SAX to retrieve data from a direct representation of the XML itself. This is accomplished by automatically creating a mapping between elements of the document's XML schema (XSD) and members of a class to be represented in memory. Other criticisms attempt to refute the claim that XML is a self-describing language (though the XML specification itself makes no such claim). JSON, YAML, and S-expressions are frequently proposed as simpler alternatives (see Comparison of data-serialization formats) that focus on representing highly structured data rather than documents, which may contain both highly structured and relatively unstructured content. However, W3C-standardized XML schema specifications offer a broader range of structured XSD data types compared to simpler serialization formats, and offer modularity and reuse through XML namespaces.
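To make the data binding idea concrete, here is a minimal hand-rolled sketch in Python; it is not any particular binding framework, and the <book> vocabulary and helper name are invented for the example:

```python
from dataclasses import dataclass
import xml.etree.ElementTree as ET

# Hand-rolled XML data binding: elements are mapped onto members of a typed
# class, so client code never walks the DOM itself. The <book> vocabulary
# and the bind_book helper are invented for this illustration.
@dataclass
class Book:
    title: str
    year: int

def bind_book(xml_text: str) -> Book:
    root = ET.fromstring(xml_text)
    return Book(title=root.findtext("title"), year=int(root.findtext("year")))

book = bind_book("<book><title>Example</title><year>1998</year></book>")
assert book.year == 1998   # typed access instead of DOM or SAX traversal
print(book)
```

Real binding frameworks generate such mappings automatically from a schema rather than by hand, which is precisely the convenience, and the coupling, that the criticism above is concerned with.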
Xenon
Xenon is a chemical element; it has symbol Xe and atomic number 54. It is a dense, colorless, odorless noble gas found in Earth's atmosphere in trace amounts. Although generally unreactive, it can undergo a few chemical reactions such as the formation of xenon hexafluoroplatinate, the first noble gas compound to be synthesized. Xenon is used in flash lamps and arc lamps, and as a general anesthetic. The first excimer laser design used a xenon dimer molecule (Xe2) as the lasing medium, and the earliest laser designs used xenon flash lamps as pumps. Xenon is also used to search for hypothetical weakly interacting massive particles and as a propellant for ion thrusters in spacecraft. Naturally occurring xenon consists of seven stable isotopes and two long-lived radioactive isotopes. More than 40 unstable xenon isotopes undergo radioactive decay, and the isotope ratios of xenon are an important tool for studying the early history of the Solar System. Radioactive xenon-135 is produced by beta decay from iodine-135 (a product of nuclear fission), and is the most significant (and unwanted) neutron absorber in nuclear reactors. History Xenon was discovered in England by the Scottish chemist William Ramsay and English chemist Morris Travers on July 12, 1898, shortly after their discovery of the elements krypton and neon. They found xenon in the residue left over from evaporating components of liquid air. Ramsay suggested the name xenon for this gas from the Greek word ξένον xénon, neuter singular form of ξένος xénos, meaning 'foreign(er)', 'strange(r)', or 'guest'. In 1902, Ramsay estimated the proportion of xenon in the Earth's atmosphere to be one part in 20 million. During the 1930s, American engineer Harold Edgerton began exploring strobe light technology for high-speed photography. This led him to the invention of the xenon flash lamp, in which light is generated by passing a brief electric current through a tube filled with xenon gas. In 1934, Edgerton was able to generate flashes as brief as one microsecond with this method. In 1939, American physician Albert R. Behnke Jr. began exploring the causes of "drunkenness" in deep-sea divers. He tested the effects of varying the breathing mixtures on his subjects, and discovered that this caused the divers to perceive a change in depth. From his results, he deduced that xenon gas could serve as an anesthetic. Although Russian toxicologist Nikolay V. Lazarev apparently studied xenon anesthesia in 1941, the first published report confirming xenon anesthesia was in 1946 by American medical researcher John H. Lawrence, who experimented on mice. Xenon was first used as a surgical anesthetic in 1951 by American anesthesiologist Stuart C. Cullen, who successfully used it with two patients. Xenon and the other noble gases were for a long time considered to be completely chemically inert and not able to form compounds. However, while teaching at the University of British Columbia, Neil Bartlett discovered that the gas platinum hexafluoride (PtF6) was a powerful oxidizing agent that could oxidize oxygen gas (O2) to form dioxygenyl hexafluoroplatinate (O2+[PtF6]−). Since O2 (1165 kJ/mol) and xenon (1170 kJ/mol) have almost the same first ionization potential, Bartlett realized that platinum hexafluoride might also be able to oxidize xenon. On March 23, 1962, he mixed the two gases and produced the first known compound of a noble gas, xenon hexafluoroplatinate.
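Bartlett's reasoning is easy to check numerically: converting the two quoted ionization energies to electronvolts per atom shows how nearly identical they are. A quick sketch (the physical constants are CODATA values):

```python
# Convert the first ionization energies quoted above from kJ/mol to eV/atom.
AVOGADRO = 6.02214076e23      # 1/mol
EV = 1.602176634e-19          # J per electronvolt

def kj_per_mol_to_ev(kj: float) -> float:
    return kj * 1e3 / AVOGADRO / EV

for name, energy in (("O2", 1165.0), ("Xe", 1170.0)):
    print(f"{name}: {kj_per_mol_to_ev(energy):.2f} eV")
# O2: 12.07 eV, Xe: 12.13 eV -- a difference of well under 1%, which is why
# PtF6's ability to oxidize O2 suggested it could oxidize xenon as well.
```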
Bartlett thought the compound's composition to be Xe+[PtF6]−, but later work revealed that it was probably a mixture of various xenon-containing salts. Since then, many other xenon compounds have been discovered, in addition to some compounds of the noble gases argon, krypton, and radon, including argon fluorohydride (HArF), krypton difluoride (KrF2), and radon fluoride. By 1971, more than 80 xenon compounds were known. In November 1989, IBM scientists demonstrated a technology capable of manipulating individual atoms. The program, called IBM in atoms, used a scanning tunneling microscope to arrange 35 individual xenon atoms on a substrate of chilled crystal of nickel to spell out the three-letter company initialism. It was the first time atoms had been precisely positioned on a flat surface. Characteristics Xenon has atomic number 54; that is, its nucleus contains 54 protons. At standard temperature and pressure, pure xenon gas has a density of 5.894 kg/m3, about 4.5 times the density of the Earth's atmosphere at sea level, 1.217 kg/m3. As a liquid, xenon has a density of up to 3.100 g/mL, with the density maximum occurring at the triple point. Liquid xenon has a high polarizability due to its large atomic volume, and thus is an excellent solvent. It can dissolve hydrocarbons, biological molecules, and even water. Under the same conditions, the density of solid xenon, 3.640 g/cm3, is greater than the average density of granite, 2.75 g/cm3. Under gigapascals of pressure, xenon forms a metallic phase. Solid xenon changes from face-centered cubic (fcc) to hexagonal close-packed (hcp) crystal phase under pressure and begins to turn metallic at about 140 GPa, with no noticeable volume change in the hcp phase. It is completely metallic at 155 GPa. When metallized, xenon appears sky blue because it absorbs red light and transmits other visible frequencies. Such behavior is unusual for a metal and is explained by the relatively small width of the electron bands in that state. Liquid or solid xenon nanoparticles can be formed at room temperature by implanting Xe+ ions into a solid matrix. Many solids have lattice constants smaller than solid Xe. This results in compression of the implanted Xe to pressures that may be sufficient for its liquefaction or solidification. Xenon is a member of the zero-valence elements that are called noble or inert gases. It is inert to most common chemical reactions (such as combustion, for example) because the outer valence shell contains eight electrons. This produces a stable, minimum energy configuration in which the outer electrons are tightly bound. In a gas-filled tube, xenon emits a blue or lavenderish glow when excited by electrical discharge. Xenon emits a band of emission lines that span the visual spectrum, but the most intense lines occur in the region of blue light, producing the coloration. Occurrence and production Xenon is a trace gas in Earth's atmosphere, occurring at a volume fraction of about 87 parts per billion, or approximately 1 part per 11.5 million. It is also found as a component of gases emitted from some mineral springs. Given a total atmospheric mass of about 5.1×10¹⁸ kg and taking the average molar mass of the atmosphere as 28.96 g/mol, this volume fraction is equivalent to some 394 ppb by mass, so the atmosphere contains on the order of 2×10¹² kg of xenon in total. Commercial Xenon is obtained commercially as a by-product of the separation of air into oxygen and nitrogen.
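Before turning to how xenon is extracted, the figures above can be cross-checked with the ideal-gas law; in this sketch the total atmospheric mass is the assumed round value used above:

```python
# Ideal-gas cross-check of the density and abundance figures quoted above.
R = 8.314462                        # J/(mol K)
M_XE, M_AIR = 0.131293, 0.02896     # kg/mol
P, T = 101325.0, 273.15             # Pa, K

rho_xe = P * M_XE / (R * T)
print(f"ideal-gas xenon density: {rho_xe:.2f} kg/m3")   # ~5.86; the measured
# 5.894 kg/m3 is slightly higher because xenon is not quite an ideal gas.

vol_frac = 87e-9                        # 87 ppb by volume, as above
mass_frac = vol_frac * M_XE / M_AIR     # ~394 ppb by mass
atmosphere_kg = 5.1e18                  # assumed total atmospheric mass
print(f"mass fraction: {mass_frac * 1e9:.0f} ppb")
print(f"total xenon:   {mass_frac * atmosphere_kg:.1e} kg")   # ~2e12 kg
```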
After this separation, generally performed by fractional distillation in a double-column plant, the liquid oxygen produced will contain small quantities of krypton and xenon. By additional fractional distillation, the liquid oxygen may be enriched to contain 0.1–0.2% of a krypton/xenon mixture, which is extracted either by adsorption onto silica gel or by distillation. Finally, the krypton/xenon mixture may be separated into krypton and xenon by further distillation. Worldwide production of xenon in 1998 was estimated at . At a density of this is equivalent to roughly . Because of its scarcity, xenon is much more expensive than the lighter noble gases—approximate prices for the purchase of small quantities in Europe in 1999 were 10 €/L (=~€1.7/g) for xenon, 1 €/L (=~€0.27/g) for krypton, and 0.20 €/L (=~€0.22/g) for neon, while the much more plentiful argon, which makes up over 1% by volume of Earth's atmosphere, costs less than a cent per liter. Solar System Within the Solar System, xenon makes up approximately one part in 630 thousand of the total mass by nucleon fraction. Xenon is relatively rare in the Sun's atmosphere, on Earth, and in asteroids and comets. The abundance of xenon in the atmosphere of planet Jupiter is unusually high, about 2.6 times that of the Sun. This abundance remains unexplained, but may have been caused by an early and rapid buildup of planetesimals—small, sub-planetary bodies—before the heating of the presolar disk; otherwise, xenon would not have been trapped in the planetesimal ices. The problem of the low terrestrial xenon may be explained by covalent bonding of xenon to oxygen within quartz, reducing the outgassing of xenon into the atmosphere. Stellar Unlike the lower-mass noble gases, the normal stellar nucleosynthesis process inside a star does not form xenon. Nucleosynthesis consumes energy to produce nuclides more massive than iron-56, and thus the synthesis of xenon represents no energy gain for a star. Instead, xenon is formed in supernova explosions by the rapid neutron-capture process (r-process), by the slow neutron-capture process (s-process) in red giant stars that have exhausted their core hydrogen and entered the asymptotic giant branch, and from radioactive decay, for example by beta decay of extinct iodine-129 and spontaneous fission of thorium, uranium, and plutonium. Nuclear fission Xenon-135 is a notable neutron poison with a high fission product yield. As it is relatively short lived, it decays at the same rate it is produced during steady operation of a nuclear reactor. However, if power is reduced or the reactor is scrammed, less xenon is destroyed than is produced from the beta decay of its parent nuclides. This phenomenon, called xenon poisoning, can cause significant problems in restarting a reactor after a scram or increasing power after it has been reduced, and it was one of several contributing factors in the Chernobyl nuclear accident. Stable or extremely long-lived isotopes of xenon are also produced in appreciable quantities in nuclear fission. Xenon-136 is produced when xenon-135 undergoes neutron capture before it can decay. The ratio of xenon-136 to xenon-135 (or its decay products) can give hints as to the power history of a given reactor, and the absence of xenon-136 is a "fingerprint" for nuclear explosions, as xenon-135 is not produced directly but only as a product of successive beta decays, and thus cannot absorb neutrons during a nuclear explosion, which is over in fractions of a second.
The stable isotope xenon-132 has a fission product yield of over 4% in the thermal neutron fission of 235U, which means that stable or nearly stable xenon isotopes have a higher mass fraction in spent nuclear fuel (which is about 3% fission products) than xenon has in air. However, there is as of 2022 no commercial effort to extract xenon from spent fuel during nuclear reprocessing. Isotopes Naturally occurring xenon is composed of seven stable isotopes: 126Xe, 128–132Xe, and 134Xe. The isotopes 126Xe and 134Xe are predicted by theory to undergo double beta decay, but this has never been observed so they are considered stable. In addition, more than 40 unstable isotopes have been studied. The longest-lived of these isotopes are the primordial 124Xe, which undergoes double electron capture with a half-life of , and 136Xe, which undergoes double beta decay with a half-life of . 129Xe is produced by beta decay of 129I, which has a half-life of 16 million years. 131mXe, 133Xe, 133mXe, and 135Xe are some of the fission products of 235U and 239Pu, and are used to detect and monitor nuclear explosions. Nuclear spin Nuclei of two of the stable isotopes of xenon, 129Xe and 131Xe (both stable isotopes with odd mass numbers), have non-zero intrinsic angular momenta (nuclear spins, suitable for nuclear magnetic resonance). The nuclear spins can be aligned beyond ordinary polarization levels by means of circularly polarized light and rubidium vapor. The resulting spin polarization of xenon nuclei can surpass 50% of its maximum possible value, greatly exceeding the thermal equilibrium value dictated by paramagnetic statistics (typically 0.001% of the maximum value at room temperature, even in the strongest magnets). Such non-equilibrium alignment of spins is a temporary condition, and is called hyperpolarization. The process of hyperpolarizing the xenon is called optical pumping (although the process is different from pumping a laser). Because a 129Xe nucleus has a spin of 1/2, and therefore a zero electric quadrupole moment, the 129Xe nucleus does not experience any quadrupolar interactions during collisions with other atoms, and the hyperpolarization persists for long periods even after the engendering light and vapor have been removed. Spin polarization of 129Xe can persist from several seconds for xenon atoms dissolved in blood to several hours in the gas phase and several days in deeply frozen solid xenon. In contrast, 131Xe has a nuclear spin value of 3/2 and a nonzero quadrupole moment, and has T1 relaxation times in the millisecond and second ranges. From fission Some radioactive isotopes of xenon (for example, 133Xe and 135Xe) are produced by neutron irradiation of fissionable material within nuclear reactors. 135Xe is of considerable significance in the operation of nuclear fission reactors. 135Xe has a huge cross section for thermal neutrons, 2.6×10⁶ barns, and operates as a neutron absorber or "poison" that can slow or stop the chain reaction after a period of operation. This was discovered in the earliest nuclear reactors built by the American Manhattan Project for plutonium production. However, the designers had made provisions in the design to increase the reactor's reactivity (the number of neutrons per fission that go on to fission other atoms of nuclear fuel). 135Xe reactor poisoning was a major factor in the Chernobyl disaster. A shutdown or decrease of power of a reactor can result in buildup of 135Xe, with reactor operation going into a condition known as the iodine pit.
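The iodine pit just mentioned is driven by two competing decays, and a small sketch makes the time scale visible. The half-lives (about 6.57 h for 135I and 9.14 h for 135Xe) are standard literature values, and the starting inventories are arbitrary illustrative numbers, so this is a qualitative sketch rather than reactor physics:

```python
import math

# Post-shutdown 135Xe buildup (the "iodine pit"): with the neutron flux gone,
# 135I keeps decaying into 135Xe, which is no longer burned off by capture.
L_I = math.log(2) / 6.57    # 135I decay constant, 1/h (assumed half-life)
L_XE = math.log(2) / 9.14   # 135Xe decay constant, 1/h (assumed half-life)

def xe135(t, i0=1.0, xe0=0.1):
    """Relative 135Xe inventory t hours after shutdown (Bateman equation)."""
    grow = (L_I * i0 / (L_XE - L_I)) * (math.exp(-L_I * t) - math.exp(-L_XE * t))
    return grow + xe0 * math.exp(-L_XE * t)

for t in (0, 5, 10, 20, 40):
    print(f"t = {t:2d} h: relative 135Xe = {xe135(t):.2f}")
# The inventory rises for roughly half a day before decaying away, which is
# why restarting a freshly scrammed reactor is hardest several hours in.
```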
Under adverse conditions, relatively high concentrations of radioactive xenon isotopes may emanate from cracked fuel rods, or fissioning of uranium in cooling water. Isotope ratios of xenon produced in natural nuclear fission reactors at Oklo in Gabon reveal the reactor properties during the chain reaction that took place about 2 billion years ago. Cosmic processes Because xenon is a tracer for two parent isotopes, xenon isotope ratios in meteorites are a powerful tool for studying the formation of the Solar System. The iodine–xenon method of dating gives the time elapsed between nucleosynthesis and the condensation of a solid object from the solar nebula. In 1960, physicist John H. Reynolds discovered that certain meteorites contained an isotopic anomaly in the form of an overabundance of xenon-129. He inferred that this was a decay product of radioactive iodine-129. This isotope is produced slowly by cosmic ray spallation and nuclear fission, but is produced in quantity only in supernova explosions. Because the half-life of 129I is comparatively short on a cosmological time scale (16 million years), this demonstrated that only a short time had passed between the supernova and the time the meteorites had solidified and trapped the 129I. These two events (supernova and solidification of gas cloud) were inferred to have happened during the early history of the Solar System, because the 129I isotope was likely generated shortly before the Solar System was formed, seeding the solar gas cloud with isotopes from a second source. This supernova source may also have caused collapse of the solar gas cloud. In a similar way, xenon isotopic ratios such as 129Xe/130Xe and 136Xe/130Xe are a powerful tool for understanding planetary differentiation and early outgassing. For example, the atmosphere of Mars shows a xenon abundance similar to that of Earth (0.08 parts per million) but Mars shows a greater abundance of 129Xe than the Earth or the Sun. Since this isotope is generated by radioactive decay, the result may indicate that Mars lost most of its primordial atmosphere, possibly within the first 100 million years after the planet was formed. In another example, excess 129Xe found in carbon dioxide well gases from New Mexico is believed to be from the decay of mantle-derived gases from soon after Earth's formation. Compounds After Neil Bartlett's discovery in 1962 that xenon can form chemical compounds, a large number of xenon compounds have been discovered and described. Almost all known xenon compounds contain the electronegative atoms fluorine or oxygen. The chemistry of xenon in each oxidation state is analogous to that of the neighboring element iodine in the immediately lower oxidation state. Halides Three fluorides are known: XeF2, XeF4, and XeF6. XeF is theorized to be unstable. These are the starting points for the synthesis of almost all xenon compounds. The solid, crystalline difluoride XeF2 is formed when a mixture of fluorine and xenon gases is exposed to ultraviolet light. The ultraviolet component of ordinary daylight is sufficient. Long-term heating of XeF2 at high temperatures under a NiF2 catalyst yields XeF6. Pyrolysis of XeF6 in the presence of NaF yields high-purity XeF4. The xenon fluorides behave as both fluoride acceptors and fluoride donors, forming salts that contain such cations as XeF+ and Xe2F3+, and anions such as XeF5−, XeF7−, and XeF8(2−). The green, paramagnetic Xe2+ is formed by the reduction of XeF2 by xenon gas. XeF2 also forms coordination complexes with transition metal ions. More than 30 such complexes have been synthesized and characterized.
Whereas the xenon fluorides are well characterized, the other halides are not. Xenon dichloride (XeCl2), formed by the high-frequency irradiation of a mixture of xenon, fluorine, and silicon or carbon tetrachloride, is reported to be an endothermic, colorless, crystalline compound that decomposes into the elements at 80 °C. However, XeCl2 may be merely a van der Waals molecule of weakly bound Xe atoms and Cl2 molecules and not a real compound. Theoretical calculations indicate that the linear XeCl2 molecule is less stable than the van der Waals complex. Xenon tetrachloride and xenon dibromide are even more unstable and they cannot be synthesized by chemical reactions. They were created by radioactive decay of 129ICl4− and 129IBr2−, respectively. Oxides and oxohalides Three oxides of xenon are known: xenon trioxide (XeO3) and xenon tetroxide (XeO4), both of which are dangerously explosive and powerful oxidizing agents, and xenon dioxide (XeO2), which was reported in 2011 with a coordination number of four. XeO2 forms when xenon tetrafluoride is poured over ice. Its crystal structure may allow it to replace silicon in silicate minerals. The XeOO+ cation has been identified by infrared spectroscopy in solid argon. Xenon does not react with oxygen directly; the trioxide is formed by the hydrolysis of XeF6: XeF6 + 3 H2O → XeO3 + 6 HF. XeO3 is weakly acidic, dissolving in alkali to form unstable xenate salts containing the HXeO4− anion. These unstable salts easily disproportionate into xenon gas and perxenate salts, containing the XeO6(4−) anion. Barium perxenate, when treated with concentrated sulfuric acid, yields gaseous xenon tetroxide: Ba2XeO6 + 2 H2SO4 → 2 BaSO4 + 2 H2O + XeO4. To prevent decomposition, the xenon tetroxide thus formed is quickly cooled into a pale-yellow solid. It explodes above −35.9 °C into xenon and oxygen gas, but is otherwise stable. A number of xenon oxyfluorides are known, including XeOF2, XeOF4, XeO2F2, and XeO3F2. XeOF2 is formed by reacting OF2 with xenon gas at low temperatures. It may also be obtained by partial hydrolysis of XeF4. It disproportionates at −20 °C into XeF2 and XeO2F2. XeOF4 is formed by the partial hydrolysis of XeF6 (XeF6 + H2O → XeOF4 + 2 HF) or by the reaction of XeF6 with sodium perxenate, Na4XeO6. The latter reaction also produces a small amount of XeO3F2. XeO2F2 is also formed by partial hydrolysis of XeF6: XeF6 + 2 H2O → XeO2F2 + 4 HF. XeOF4 reacts with CsF to form the XeOF5− anion, while XeOF3 reacts with the alkali metal fluorides KF, RbF and CsF to form the XeOF4− anion. Other compounds Xenon can be directly bonded to a less electronegative element than fluorine or oxygen, particularly carbon. Electron-withdrawing groups, such as groups with fluorine substitution, are necessary to stabilize these compounds. Numerous such compounds have been characterized, including the pentafluorophenylxenon cation [C6F5Xe]+, where C6F5 is the pentafluorophenyl group. Other compounds containing xenon bonded to a less electronegative element include F–Xe–N(SO2F)2 and F–Xe–BF2. The latter is synthesized from dioxygenyl tetrafluoroborate, O2BF4, at −100 °C. An unusual ion containing xenon is the tetraxenonogold(II) cation, AuXe4(2+), which contains Xe–Au bonds. This ion occurs in the compound AuXe4(Sb2F11)2, and is remarkable in having direct chemical bonds between two notoriously unreactive atoms, xenon and gold, with xenon acting as a transition metal ligand. A similar mercury complex (HgXe)(Sb3F17) (formulated as [HgXe2+][Sb2F11–][SbF6–]) is also known. The compound Xe2Sb4F21 contains a Xe–Xe bond, the longest element-element bond known (308.71 pm = 3.0871 Å). In 1995, M. Räsänen and co-workers, scientists at the University of Helsinki in Finland, announced the preparation of xenon dihydride (HXeH), and later xenon hydride-hydroxide (HXeOH), hydroxenoacetylene (HXeCCH), and other Xe-containing molecules.
In 2008, Khriachtchev et al. reported the preparation of HXeOXeH by the photolysis of water within a cryogenic xenon matrix. Deuterated molecules, HXeOD and DXeOH, have also been produced. Clathrates and excimers In addition to compounds where xenon forms a chemical bond, xenon can form clathrates—substances where xenon atoms or pairs are trapped by the crystalline lattice of another compound. One example is xenon hydrate (Xe·5.75H2O), where xenon atoms occupy vacancies in a lattice of water molecules. This clathrate has a melting point of 24 °C. The deuterated version of this hydrate has also been produced. Another example is xenon hydride (Xe(H2)8), in which xenon pairs (dimers) are trapped inside solid hydrogen. Such clathrate hydrates can occur naturally under conditions of high pressure, such as in Lake Vostok underneath the Antarctic ice sheet. Clathrate formation can be used to fractionally distill xenon, argon and krypton. Xenon can also form endohedral fullerene compounds, where a xenon atom is trapped inside a fullerene molecule. The xenon atom trapped in the fullerene can be observed by 129Xe nuclear magnetic resonance (NMR) spectroscopy. Through the sensitive chemical shift of the xenon atom to its environment, chemical reactions on the fullerene molecule can be analyzed. These observations are not without caveat, however, because the xenon atom has an electronic influence on the reactivity of the fullerene. When xenon atoms are in the ground energy state, they repel each other and will not form a bond. When xenon atoms become energized, however, they can form an excimer (excited dimer) until the electrons return to the ground state. This entity is formed because the xenon atom tends to complete the outermost electronic shell by adding an electron from a neighboring xenon atom. The typical lifetime of a xenon excimer is 1–5 nanoseconds, and the decay releases photons with wavelengths of about 150 and 173 nm. Xenon can also form excimers with other elements, such as the halogens bromine, chlorine, and fluorine. Applications Although xenon is rare and relatively expensive to extract from the Earth's atmosphere, it has a number of applications. Illumination and optics Gas-discharge lamps Xenon is used in light-emitting devices called xenon flash lamps, used in photographic flashes and stroboscopic lamps; to excite the active medium in lasers, which then generate coherent light; and, occasionally, in bactericidal lamps. The first solid-state laser, invented in 1960, was pumped by a xenon flash lamp, and lasers used to power inertial confinement fusion are also pumped by xenon flash lamps. Continuous, short-arc, high pressure xenon arc lamps have a color temperature closely approximating noon sunlight and are used in solar simulators. That is, the chromaticity of these lamps closely approximates a heated black body radiator at the temperature of the Sun. First introduced in the 1940s, these lamps replaced the shorter-lived carbon arc lamps in movie projectors. They are also employed in typical 35mm, IMAX, and digital film projection systems. They are an excellent source of short wavelength ultraviolet radiation and have intense emissions in the near infrared used in some night vision systems. Xenon is used as a starter gas in metal halide lamps for automotive HID headlights and high-end "tactical" flashlights. The individual cells in a plasma display contain a mixture of xenon and neon ionized with electrodes.
The interaction of this plasma with the electrodes generates ultraviolet photons, which then excite the phosphor coating on the front of the display. Xenon is used as a "starter gas" in high pressure sodium lamps. It has the lowest thermal conductivity and lowest ionization potential of all the non-radioactive noble gases. As a noble gas, it does not interfere with the chemical reactions occurring in the operating lamp. The low thermal conductivity minimizes thermal losses in the lamp while in the operating state, and the low ionization potential causes the breakdown voltage of the gas to be relatively low in the cold state, which allows the lamp to be more easily started. Lasers In 1962, a group of researchers at Bell Laboratories discovered laser action in xenon, and later found that the laser gain was improved by adding helium to the lasing medium. The first excimer laser used a xenon dimer (Xe2) energized by a beam of electrons to produce stimulated emission at an ultraviolet wavelength of 176 nm. Xenon chloride and xenon fluoride have also been used in excimer (or, more accurately, exciplex) lasers. Medical Anesthesia Xenon has been used as a general anesthetic, but it is more expensive than conventional anesthetics. Xenon interacts with many different receptors and ion channels, and like many theoretically multi-modal inhalation anesthetics, these interactions are likely complementary. Xenon is a high-affinity glycine-site NMDA receptor antagonist. However, xenon is different from certain other NMDA receptor antagonists in that it is not neurotoxic and it inhibits the neurotoxicity of ketamine and nitrous oxide (N2O), while actually producing neuroprotective effects. Unlike ketamine and nitrous oxide, xenon does not stimulate a dopamine efflux in the nucleus accumbens. Like nitrous oxide and cyclopropane, xenon activates the two-pore domain potassium channel TREK-1. A related channel, TASK-3, also implicated in the actions of inhalation anesthetics, is insensitive to xenon. Xenon inhibits nicotinic acetylcholine α4β2 receptors, which contribute to spinally mediated analgesia. Xenon is an effective inhibitor of plasma membrane Ca2+ ATPase. Xenon inhibits Ca2+ ATPase by binding to a hydrophobic pore within the enzyme and preventing the enzyme from assuming active conformations. Xenon is a competitive inhibitor of the serotonin 5-HT3 receptor. While this action is neither anesthetic nor antinociceptive, it reduces anesthesia-emergent nausea and vomiting. Xenon has a minimum alveolar concentration (MAC) of 72% at age 40, making it 44% more potent than N2O as an anesthetic. Thus, it can be used with oxygen in concentrations that have a lower risk of hypoxia. Unlike nitrous oxide, xenon is not a greenhouse gas and is viewed as environmentally friendly. Though recycled in modern systems, xenon vented to the atmosphere simply returns to its original source, without environmental impact. Neuroprotectant Xenon induces robust cardioprotection and neuroprotection through a variety of mechanisms. Through its influence on Ca2+, K+, KATP/HIF, and NMDA antagonism, xenon is neuroprotective when administered before, during and after ischemic insults. Xenon is a high-affinity antagonist at the NMDA receptor glycine site. Xenon is cardioprotective in ischemia-reperfusion conditions by inducing pharmacologic non-ischemic preconditioning. Xenon is cardioprotective by activating PKC-epsilon and downstream p38-MAPK. Xenon mimics neuronal ischemic preconditioning by activating ATP-sensitive potassium channels.
Xenon allosterically reduces ATP-mediated channel activation inhibition independently of the sulfonylurea receptor 1 subunit, increasing KATP open-channel time and frequency. Sports doping Inhaling a xenon/oxygen mixture activates production of the transcription factor HIF-1-alpha, which may lead to increased production of erythropoietin. The latter hormone is known to increase red blood cell production and athletic performance. Reportedly, doping with xenon inhalation has been used in Russia since 2004 and perhaps earlier. On August 31, 2014, the World Anti-Doping Agency (WADA) added xenon (and argon) to the list of prohibited substances and methods, although no reliable doping tests for these gases have yet been developed. In addition, effects of xenon on erythropoietin production in humans have not been demonstrated so far. Imaging Gamma emission from the radioisotope 133Xe of xenon can be used to image the heart, lungs, and brain, for example, by means of single photon emission computed tomography. 133Xe has also been used to measure blood flow. Xenon, particularly hyperpolarized 129Xe, is a useful contrast agent for magnetic resonance imaging (MRI). In the gas phase, it can image cavities in a porous sample, alveoli in lungs, or the flow of gases within the lungs. Because xenon is soluble both in water and in hydrophobic solvents, it can image various soft living tissues. Xenon-129 is currently being used as a visualization agent in MRI scans. When a patient inhales hyperpolarized xenon-129, ventilation and gas exchange in the lungs can be imaged and quantified. Unlike xenon-133, xenon-129 is non-ionizing and is safe to be inhaled with no adverse effects. Surgery The xenon chloride excimer laser has certain dermatological uses. NMR spectroscopy Because of the xenon atom's large, flexible outer electron shell, the NMR spectrum changes in response to surrounding conditions and can be used to monitor the surrounding chemical circumstances. For instance, xenon dissolved in water, xenon dissolved in hydrophobic solvent, and xenon associated with certain proteins can be distinguished by NMR. Hyperpolarized xenon can be used by surface chemists. Normally, it is difficult to characterize surfaces with NMR because signals from a surface are overwhelmed by signals from the atomic nuclei in the bulk of the sample, which are much more numerous than surface nuclei. However, nuclear spins on solid surfaces can be selectively polarized by transferring spin polarization to them from hyperpolarized xenon gas. This makes the surface signals strong enough to measure and distinguish from bulk signals. Other In nuclear energy studies, xenon is used in bubble chambers, probes, and in other areas where a high molecular weight and inert chemistry is desirable. A by-product of nuclear weapon testing is the release of radioactive xenon-133 and xenon-135. These isotopes are monitored to ensure compliance with nuclear test ban treaties, and to confirm nuclear tests by states such as North Korea. Liquid xenon is used in calorimeters to measure gamma rays, and as a detector of hypothetical weakly interacting massive particles, or WIMPs. When a WIMP collides with a xenon nucleus, theory predicts it will impart enough energy to cause ionization and scintillation. Liquid xenon is useful for these experiments because its density makes dark matter interaction more likely and it permits a quiet detector through self-shielding.
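The contrast between hyperpolarized and thermally polarized 129Xe that underlies the MRI application above can be estimated from Boltzmann statistics. In the sketch below the 129Xe gyromagnetic ratio (about 11.78 MHz/T in magnitude) is an assumed literature value; the physical constants are CODATA:

```python
import math

# Thermal-equilibrium polarization of a spin-1/2 nucleus: P = tanh(hf / 2kT),
# where f is the Larmor frequency at field B.
H = 6.62607015e-34         # J s
K = 1.380649e-23           # J/K
GAMMA_MHZ_PER_T = 11.78    # assumed |gamma|/2pi for 129Xe

B, T = 9.4, 298.0          # a strong research magnet, room temperature
f = GAMMA_MHZ_PER_T * 1e6 * B
P = math.tanh(H * f / (2 * K * T))
print(f"thermal 129Xe polarization at {B} T: {P:.1e}")   # ~9e-6, i.e. ~0.001%
# Optical pumping reaches polarizations above 0.5: more than four orders of
# magnitude beyond thermal equilibrium, which is what makes the signal usable.
```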
Xenon is the preferred propellant for ion propulsion of spacecraft because it has low ionization potential per atomic weight and can be stored as a liquid at near room temperature (under high pressure), yet easily evaporated to feed the engine. Xenon is inert, environmentally friendly, and less corrosive to an ion engine than other fuels such as mercury or cesium. Xenon was first used for satellite ion engines during the 1970s. It was later employed as a propellant for JPL's Deep Space 1 probe, Europe's SMART-1 spacecraft and for the three ion propulsion engines on NASA's Dawn Spacecraft. Chemically, the perxenate compounds are used as oxidizing agents in analytical chemistry. Xenon difluoride is used as an etchant for silicon, particularly in the production of microelectromechanical systems (MEMS). The anticancer drug 5-fluorouracil can be produced by reacting xenon difluoride with uracil. Xenon is also used in protein crystallography. Applied at pressures from 0.5 to 5 MPa (5 to 50 atm) to a protein crystal, xenon atoms bind in predominantly hydrophobic cavities, often creating a high-quality, isomorphous, heavy-atom derivative that can be used for solving the phase problem. Precautions Xenon gas can be safely kept in normal sealed glass or metal containers at standard temperature and pressure. However, it readily dissolves in most plastics and rubber, and will gradually escape from a container sealed with such materials. Xenon is non-toxic, although it does dissolve in blood and belongs to a select group of substances that penetrate the blood–brain barrier, causing mild to full surgical anesthesia when inhaled in high concentrations with oxygen. The speed of sound in xenon gas (169 m/s) is less than that in air because the average velocity of the heavy xenon atoms is less than that of nitrogen and oxygen molecules in air. Hence, xenon vibrates more slowly in the vocal cords when exhaled and produces lowered voice tones (low-frequency-enhanced sounds, but the fundamental frequency or pitch does not change), an effect opposite to the high-toned voice produced in helium. Specifically, when the vocal tract is filled with xenon gas, its natural resonant frequency becomes lower than when it is filled with air. Thus, the low frequencies of the sound wave produced by the same direct vibration of the vocal cords would be enhanced, resulting in a change of the timbre of the sound amplified by the vocal tract. Like helium, xenon does not satisfy the body's need for oxygen, and it is both a simple asphyxiant and an anesthetic more powerful than nitrous oxide; consequently, and because xenon is expensive, many universities have prohibited the voice stunt as a general chemistry demonstration. The gas sulfur hexafluoride is similar to xenon in molecular weight (146 versus 131), less expensive, and though an asphyxiant, not toxic or anesthetic; it is often substituted in these demonstrations. Dense gases such as xenon and sulfur hexafluoride can be breathed safely when mixed with at least 20% oxygen. Xenon at 80% concentration along with 20% oxygen rapidly produces the unconsciousness of general anesthesia. Breathing mixes gases of different densities very effectively and rapidly so that heavier gases are purged along with the oxygen, and do not accumulate at the bottom of the lungs. There is, however, a danger associated with any heavy gas in large quantities: it may sit invisibly in a container, and a person who enters an area filled with an odorless, colorless gas may be asphyxiated without warning. 
Xenon is rarely used in large enough quantities for this to be a concern, though the potential for danger exists any time a tank or container of xenon is kept in an unventilated space. Water-soluble xenon compounds such as monosodium xenate are moderately toxic, but have a very short half-life in the body – intravenously injected xenate is reduced to elemental xenon in about a minute.
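As a closing cross-check on the Precautions section, the quoted 169 m/s speed of sound follows from the ideal-gas expression v = sqrt(γRT/M) for a monatomic gas (γ = 5/3) near 0 °C:

```python
import math

# Ideal-gas speed of sound: v = sqrt(gamma * R * T / M).
R = 8.314462                        # J/(mol K)
M_XE, M_AIR = 0.131293, 0.02896     # kg/mol

T = 273.15                          # K; the quoted 169 m/s is for about 0 C
v_xe = math.sqrt((5.0 / 3.0) * R * T / M_XE)   # monatomic: gamma = 5/3
v_air = math.sqrt(1.4 * R * T / M_AIR)         # diatomic air: gamma ~ 1.4
print(f"xenon: {v_xe:.0f} m/s, air: {v_air:.0f} m/s")   # ~170 vs ~331 m/s
```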
X Window System
The X Window System (X11, or simply X) is a windowing system for bitmap displays, common on Unix-like operating systems. X originated as part of Project Athena at Massachusetts Institute of Technology (MIT) in 1984. The X protocol has been at version 11 (hence "X11") since September 1987. The X.Org Foundation leads the X project, with the current reference implementation, X.Org Server, available as free and open-source software under the MIT License and similar permissive licenses. Purpose and abilities X is an architecture-independent system for remote graphical user interfaces and input device capabilities. Each person using a networked terminal has the ability to interact with the display with any type of user input device. In its standard distribution it is a complete, albeit simple, display and interface solution which delivers a standard toolkit and protocol stack for building graphical user interfaces on most Unix-like operating systems and OpenVMS, and has been ported to many other contemporary general purpose operating systems. X provides the basic framework, or primitives, for building such GUI environments: drawing and moving windows on the display and interacting with a mouse, keyboard or touchscreen. X does not mandate the user interface; individual client programs handle this. Programs may use X's graphical abilities with no user interface. As such, the visual styling of X-based environments varies greatly; different programs may present radically different interfaces. Unlike most earlier display protocols, X was specifically designed to be used over network connections rather than on an integral or attached display device. X features network transparency, which means an X program running on a computer somewhere on a network (such as the Internet) can display its user interface on an X server running on some other computer on the network. The X server is typically the provider of graphics resources and keyboard/mouse events to X clients, meaning that the X server is usually running on the computer in front of a human user, while the X client applications run anywhere on the network and communicate with the user's computer to request the rendering of graphics content and receive events from input devices including keyboards and mice. The fact that the term "server" is applied to the software in front of the user is often surprising to users accustomed to their programs being clients to services on remote computers. Here, rather than a remote database being the resource for a local app, the user's graphic display and input devices become resources made available by the local X server to both local and remotely hosted X client programs that need to share the user's graphics and input devices to communicate with the user. X's network protocol is based on X command primitives. This approach allows both 2D and (through extensions like GLX) 3D operations by an X client application which might be running on a different computer to still be fully accelerated on the X server's display. For example, in classic OpenGL (before version 3.0), display lists containing large numbers of objects could be constructed and stored entirely in the X server by a remote X client program, and each then rendered by sending a single glCallList(which) across the network. X provides no native support for audio; several projects exist to fill this niche, some also providing transparent network support.
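To make the wire protocol just described concrete, the sketch below performs only the very first step of an X11 session: the connection setup request. The host and display number are assumptions, and many modern servers ship with TCP listening disabled (local clients then use a Unix domain socket such as /tmp/.X11-unix/X0 instead):

```python
import socket
import struct

# Minimal X11 connection handshake over TCP. HOST and DISPLAY are assumed;
# by convention an X server for display :n listens on TCP port 6000 + n.
HOST, DISPLAY = "127.0.0.1", 0
PORT = 6000 + DISPLAY

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    # Setup request: byte-order byte ('l' = little-endian), protocol version
    # 11.0, and empty authorization-protocol name and data.
    setup = struct.pack("<BxHHHHxx", ord("l"), 11, 0, 0, 0)
    sock.sendall(setup)
    status = sock.recv(1)
    # First reply byte: 1 = Success, 0 = Failed, 2 = Authenticate.
    print({b"\x01": "accepted", b"\x00": "refused",
           b"\x02": "authenticate"}.get(status, "unexpected reply"))
```

A real client library would follow this with resource-ID negotiation and then issue the command primitives (create window, draw, and so on) over the same connection.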
Software architecture X uses a client–server model: an X server communicates with various client programs. The server accepts requests for graphical output (windows) and sends back user input (from keyboard, mouse, or touchscreen). The server may function as an application displaying to a window of another display system, as a system program controlling the video output of a PC, or as a dedicated piece of hardware. This client–server terminology (the user's terminal being the server and the applications being the clients) often confuses new X users, because the terms appear reversed. But X takes the perspective of the application, rather than that of the end-user: X provides display and I/O services to applications, so it is a server; applications use these services, thus they are clients. The communication protocol between server and client operates network-transparently: the client and server may run on the same machine or on different ones, possibly with different architectures and operating systems. A client and server can even communicate securely over the Internet by tunneling the connection over an encrypted network session. An X client itself may emulate an X server by providing display services to other clients. This is known as "X nesting". Open-source clients such as Xnest and Xephyr support such X nesting. Remote desktop To run an X client application on a remote machine, the user may do the following: open a terminal window on the local machine, use the ssh command to connect to the remote machine, and request a local display/input service (e.g., export DISPLAY=[user's machine]:0 if not using SSH with X forwarding enabled). The remote X client application will then make a connection to the user's local X server, providing display and input to the user. Alternatively, the local machine may run a small program that connects to the remote machine and starts the client application. Practical examples of remote clients include: administering a remote machine graphically (similar to using remote desktop, but with single windows); using a client application to join with large numbers of other terminal users in collaborative workgroups; running a computationally intensive simulation on a remote machine and displaying the results on a local desktop machine; and running graphical software on several machines at once, controlled by a single display, keyboard and mouse. User interfaces X primarily defines protocol and graphics primitives; it deliberately contains no specification for application user-interface design, such as button, menu, or window title-bar styles. Instead, application software (such as window managers, GUI widget toolkits and desktop environments, or application-specific graphical user interfaces) defines and provides such details. As a result, there is no typical X interface and several different desktop environments have become popular among users. A window manager controls the placement and appearance of application windows. This may result in desktop interfaces reminiscent of those of Microsoft Windows or of the Apple Macintosh (examples include GNOME 2, KDE Plasma, Xfce) or have radically different controls (such as a tiling window manager, like wmii or Ratpoison). Some interfaces such as Sugar or ChromeOS eschew the desktop metaphor altogether, simplifying their interfaces for specialized applications.
Window managers range in sophistication and complexity from the bare-bones (e.g., twm, the basic window manager supplied with X, or evilwm, an extremely light window manager) to the more comprehensive desktop environments such as Enlightenment and even to application-specific window managers for vertical markets such as point-of-sale. Many users use X with a desktop environment, which, aside from the window manager, includes various applications using a consistent user interface. Popular desktop environments include GNOME, KDE Plasma and Xfce. The UNIX 98 standard environment is the Common Desktop Environment (CDE). The freedesktop.org initiative addresses interoperability between desktops and the components needed for a competitive X desktop. Implementations The X.Org implementation is the canonical implementation of X. Owing to liberal licensing, a number of variations, both free and open source and proprietary, have appeared. Commercial Unix vendors have tended to take the reference implementation and adapt it for their hardware, usually customizing it and adding proprietary extensions. Until 2004, XFree86 provided the most common X variant on free Unix-like systems. XFree86 started as a port of X to 386-compatible PCs and, by the end of the 1990s, had become the greatest source of technical innovation in X and the de facto standard of X development. Since 2004, however, the X.Org Server, a fork of XFree86, has become predominant. While it is common to associate X with Unix, X servers also exist natively within other graphical environments. VMS Software Inc.'s OpenVMS operating system includes a version of X with Common Desktop Environment (CDE), known as DECwindows, as its standard desktop environment. Apple originally ported X to macOS in the form of X11.app, but that has been deprecated in favor of the XQuartz implementation. Third-party servers under Apple's older operating systems in the 1990s, System 7, and Mac OS 8 and 9, included Apple's MacX and White Pine Software's eXodus. Microsoft Windows is not shipped with support for X, but many third-party implementations exist, as free and open source software such as Cygwin/X, and proprietary products such as Exceed, MKS X/Server, Reflection X, X-Win32 and Xming. There are also Java implementations of X servers. WeirdX runs on any platform supporting Swing 1.1, and will run as an applet within most browsers. The Android X Server is an open source Java implementation that runs on Android devices. When an operating system with a native windowing system hosts X in addition, the X system can either use its own normal desktop in a separate host window or it can run rootless, meaning the X desktop is hidden and the host windowing environment manages the geometry and appearance of the hosted X windows within the host screen. X terminals An X terminal is a thin client that only runs an X server. This architecture became popular for building inexpensive terminal parks for many users to simultaneously use the same large computer server to execute application programs as clients of each user's X terminal. This use is very much aligned with the original intention of the MIT project. X terminals explore the network (the local broadcast domain) using the X Display Manager Control Protocol to generate a list of available hosts that are allowed as clients. One of the client hosts should run an X display manager. 
A limitation of X terminals and most thin clients is that they are not capable of any input or output other than the keyboard, mouse, and display. All relevant data is assumed to exist solely on the remote server, and the X terminal user has no methods available to save or load data from a local peripheral device. Dedicated (hardware) X terminals have fallen out of use; a PC or modern thin client with an X server typically provides the same functionality at the same, or lower, cost. Limitations and criticism The Unix-Haters Handbook (1994) devoted a full chapter to the problems of X. Why X Is Not Our Ideal Window System (1990) by Gajewska, Manasse and McCormack detailed problems in the protocol with recommendations for improvement. User interface issues The lack of design guidelines in X has resulted in several vastly different interfaces, and in applications that have not always worked well together. The Inter-Client Communication Conventions Manual (ICCCM), a specification for client interoperability, has a reputation for being difficult to implement correctly. Further standards efforts such as Motif and CDE did not alleviate problems. This has frustrated users and programmers. Graphics programmers now generally address consistency of application look and feel and communication by coding to a specific desktop environment or to a specific widget toolkit, which also avoids having to deal directly with the ICCCM. X also lacks native support for user-defined stored procedures on the X server, in the manner of NeWS; there is no Turing-complete scripting facility. Various desktop environments may thus offer their own (usually mutually incompatible) facilities. Computer accessibility related issues Systems built upon X may have accessibility issues that make utilization of a computer difficult for disabled users, including right click, double click, middle click, mouse-over, and focus stealing. Some X11 clients deal with accessibility issues better than others, so persons with accessibility problems are not locked out of using X11. However, there is no accessibility standard or accessibility guidelines for X11. Within the X11 standards process there is no working group on accessibility; however, accessibility needs are being addressed by software projects to provide these features on top of X. The Orca project adds accessibility support to the X Window System, including implementing an API (AT-SPI). This is coupled with GNOME's ATK to allow for accessibility features to be implemented in X programs using the GNOME/GTK APIs. KDE provides a different set of accessibility software, including a text-to-speech converter and a screen magnifier. The other major desktops (LXDE, Xfce and Enlightenment) attempt to be compatible with ATK. Network An X client cannot generally be detached from one server and reattached to another unless its code specifically provides for it (Emacs is one of the few common programs with this ability). As such, moving an entire session from one X server to another is generally not possible. However, approaches like Virtual Network Computing (VNC), NX and Xpra allow a virtual session to be reached from different X servers (in a manner similar to GNU Screen in relation to terminals), and other applications and toolkits provide related facilities. Workarounds like x11vnc (VNC :0 viewers), Xpra's shadow mode and NX's nxagent shadow mode also exist to make the current X-server screen available.
This ability allows the user interface (mouse, keyboard, monitor) of a running application to be switched from one location to another without stopping and restarting the application. Network traffic between an X server and remote X clients is not encrypted by default. An attacker with a packet sniffer can intercept it, making it possible to view anything displayed to or sent from the user's screen. The most common way to encrypt X traffic is to establish a Secure Shell (SSH) tunnel for communication. Like all thin clients, when using X across a network, bandwidth limitations can impede the use of bitmap-intensive applications that require rapidly updating large portions of the screen with low latency, such as 3D animation or photo editing. Even a relatively small uncompressed 640×480×24 bit 30 fps video stream (~221 Mbit/s) can easily outstrip the bandwidth of a 100 Mbit/s network for a single client. In contrast, modern versions of X generally have extensions such as Mesa allowing local display of a local program's graphics to be optimized to bypass the network model and directly control the video card, for use with full-screen video, rendered 3D applications, and other such applications. Client–server separation X's design requires the clients and server to operate separately, and device independence and the separation of client and server incur overhead. Most of the overhead comes from network round-trip delay time between client and server (latency) rather than from the protocol itself: the best solutions to performance issues depend on efficient application design. A common criticism of X is that its network features result in excessive complexity and decreased performance if only used locally. Modern X implementations use Unix domain sockets for efficient connections on the same host. Additionally, shared memory (via the MIT-SHM extension) can be employed for faster client–server communication. However, the programmer must still explicitly activate and use the shared memory extension. It is also necessary to provide fallback paths in order to stay compatible with older implementations, and in order to communicate with non-local X servers.
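The bandwidth figure above follows directly from the stream's dimensions; spelling the arithmetic out:

```python
# Uncompressed bandwidth of the example video stream quoted above.
width, height, bits_per_pixel, fps = 640, 480, 24, 30
bits_per_second = width * height * bits_per_pixel * fps
print(f"{bits_per_second / 1e6:.0f} Mbit/s")   # ~221 Mbit/s, comfortably more
# than a 100 Mbit/s link can carry for even a single client.
```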
The project has since moved to being a Wayland compositor instead of being an alternative display server. Other alternatives attempt to avoid the overhead of X by working directly with the hardware; such projects include DirectFB. The Direct Rendering Infrastructure (DRI) provides a kernel-level interface to the framebuffer. Additional ways to achieve a functional form of the "network transparency" feature of X, via network transmissibility of graphical services, include: Virtual Network Computing (VNC), a very low-level system which sends compressed bitmaps across the network; the Unix implementation includes an X server Remote Desktop Protocol (RDP), which is similar to VNC in purpose, but originated on Microsoft Windows before being ported to Unix-like systems, e.g. NX Citrix XenApp, an X-like protocol and application stack for Microsoft Windows Tarantella, which provides a Java-based remote-gui-client for use in web browsers History Predecessors Several bitmap display systems preceded X. From Xerox came the Alto (1973) and the Star (1981). From Apollo Computer came Display Manager (1981). From Apple came the Lisa (1983) and the Macintosh (1984). The Unix world had the Andrew Project (1982) and Rob Pike's Blit terminal (1982). Carnegie Mellon University produced a remote-access application called Alto Terminal, that displayed overlapping windows on the Xerox Alto, and made remote hosts (typically DEC VAX systems running Unix) responsible for handling window-exposure events and refreshing window contents as necessary. X derives its name as a successor to a pre-1983 window system called W (the letter preceding X in the English alphabet). W ran under the V operating system. W used a network protocol supporting terminal and graphics windows, the server maintaining display lists. Origin and early development The original idea of X emerged at MIT in 1984 as a collaboration between Jim Gettys (of Project Athena) and Bob Scheifler (of the MIT Laboratory for Computer Science). Scheifler needed a usable display environment for debugging the Argus system. Project Athena (a joint project between DEC, MIT and IBM to provide easy access to computing resources for all students) needed a platform-independent graphics system to link together its heterogeneous multiple-vendor systems; the window system then under development in Carnegie Mellon University's Andrew Project did not make licenses available, and no alternatives existed. The project solved this by creating a protocol that could both run local applications and call on remote resources. In mid-1983 an initial port of W to Unix ran at one-fifth of its speed under V; in May 1984, Scheifler replaced the synchronous protocol of W with an asynchronous protocol and the display lists with immediate mode graphics to make X version 1. X became the first windowing system environment to offer true hardware independence and vendor independence. Scheifler, Gettys and Ron Newman set to work and X progressed rapidly. They released Version 6 in January 1985. DEC, then preparing to release its first Ultrix workstation, judged X the only windowing system likely to become available in time. DEC engineers ported X6 to DEC's QVSS display on MicroVAX. In the second quarter of 1985, X acquired color support to function in the DEC VAXstation-II/GPX, forming what became version 9. A group at Brown University ported version 9 to the IBM RT PC, but problems with reading unaligned data on the RT forced an incompatible protocol change, leading to version 10 in late 1985. 
X10R1 was released in 1985. By 1986, outside organizations had begun asking for X. X10R2 was released in January 1986, then X10R3 in February 1986. Although MIT had licensed X6 to some outside groups for a fee, it decided at this time to license X10R3 and future versions under what became known as the MIT License, intending to popularize X further and, in return, hoping that many more applications would become available. X10R3 became the first version to achieve wide deployment, with both DEC and Hewlett-Packard releasing products based on it. Other groups ported X10 to Apollo and to Sun workstations and even to the IBM PC/AT. Demonstrations of the first commercial application for X (a mechanical computer-aided engineering system from Cognition Inc. that ran on VAXes and remotely displayed on PCs running an X server ported by Jim Fulton and Jan Hardenbergh) took place at the Autofact trade show at that time. The last version of X10, X10R4, appeared in December 1986. Attempts were made to enable X servers as real-time collaboration devices, much as Virtual Network Computing (VNC) would later allow a desktop to be shared. One such early effort was Philip J. Gust's SharedX tool. Although X10 offered interesting and powerful functionality, it had become obvious that the X protocol could use a more hardware-neutral redesign before it became too widely deployed, but MIT alone would not have the resources available for such a complete redesign. As it happened, DEC's Western Software Laboratory found itself between projects with an experienced team. Smokey Wallace of DEC WSL and Jim Gettys proposed that DEC WSL build X11 and make it freely available under the same terms as X9 and X10. This process started in May 1986, with the protocol finalized in August. Alpha testing of the software started in February 1987, beta-testing in May; the release of X11 finally occurred on 15 September 1987. The X11 protocol design, led by Scheifler, was extensively discussed on open mailing lists on the nascent Internet that were bridged to USENET newsgroups. Gettys moved to California to help lead the X11 development work at WSL from DEC's Systems Research Center, where Phil Karlton and Susan Angebrandt led the X11 sample server design and implementation. X therefore represents one of the first very large-scale distributed free and open source software projects. The MIT X Consortium and the X Consortium, Inc. By the late 1980s X was, Simson Garfinkel wrote in 1989, "Athena's most important single achievement to date". DEC reportedly believed that its development alone had made the company's donation to MIT worthwhile. Gettys joined the design team for the VAXstation 2000 to ensure that X—which DEC called DECwindows—would run on it, and the company assigned 1,200 employees to port X to both Ultrix and VMS. In 1987, with the success of X11 becoming apparent, MIT wished to relinquish the stewardship of X, but at a June 1987 meeting with nine vendors, the vendors told MIT that they believed in the need for a neutral party to keep X from fragmenting in the marketplace. In January 1988, the MIT X Consortium formed as a non-profit vendor group, with Scheifler as director, to direct the future development of X in a neutral atmosphere inclusive of commercial and educational interests. Jim Fulton joined in January 1988 and Keith Packard in March 1988 as senior developers, with Jim focusing on Xlib, fonts, window managers, and utilities; and Keith re-implementing the server. Donna Converse, Chris D. 
Peterson, and Stephen Gildea joined later that year, focusing on toolkits and widget sets, working closely with Ralph Swick of MIT Project Athena. The MIT X Consortium produced several significant revisions to X11, the first (Release 2, X11R2) in February 1988. Jay Hersh joined the staff in January 1991 to work on PEX and X11 3D functionality. He was followed soon after by Ralph Mor (who also worked on PEX) and Dave Sternlicht. In 1993, as the MIT X Consortium prepared to depart from MIT, the staff were joined by R. Gary Cutbill, Kaleb Keithley, and David Wiggins. In 1993, the X Consortium, Inc. (a non-profit corporation) formed as the successor to the MIT X Consortium. It released X11R6 on 16 May 1994. In 1995 it took on the development of the Motif toolkit and of the Common Desktop Environment for Unix systems. The X Consortium dissolved at the end of 1996, producing a final revision, X11R6.3, and a legacy of increasing commercial influence in the development. The Open Group In January 1997, the X Consortium passed stewardship of X to The Open Group, a vendor group formed in early 1996 by the merger of the Open Software Foundation and X/Open. The Open Group released X11R6.4 in early 1998. Controversially, X11R6.4 departed from the traditional liberal licensing terms, as the Open Group sought to assure funding for the development of X, and specifically cited XFree86 as not significantly contributing to X. The new terms would have made X no longer free software: zero-cost for noncommercial use, but a fee otherwise. After XFree86 seemed poised to fork, the Open Group relicensed X11R6.4 under the traditional license in September 1998. The Open Group's last release came as X11R6.4 patch 3. X.Org and XFree86 XFree86 originated in 1992 from the X386 server for IBM PC compatibles included with X11R5 in 1991, written by Thomas Roell and Mark W. Snitily and donated to the MIT X Consortium by Snitily Graphics Consulting Services (SGCS). XFree86 evolved over time from just one port of X into the leading and most popular implementation and the de facto standard of X's development. In May 1999, The Open Group formed X.Org. X.Org supervised the release of versions X11R6.5.1 onward. X development at this time had become moribund; most technical innovation since the dissolution of the X Consortium had taken place in the XFree86 project. In 1999, the XFree86 team joined X.Org as an honorary (non-paying) member, encouraged by various hardware companies interested in using XFree86 with Linux and in its status as the most popular version of X. By 2003, while the popularity of Linux (and hence the installed base of X) surged, X.Org remained inactive, and active development took place largely within XFree86. However, considerable dissent developed within XFree86. The XFree86 project suffered from a perception of a far too cathedral-like development model; developers could not get CVS commit access and vendors had to maintain extensive patch sets. In March 2003, the XFree86 organization expelled Keith Packard, who had joined XFree86 after the end of the original MIT X Consortium, amid considerable ill feeling. X.Org and XFree86 began discussing a reorganisation suited to properly nurturing the development of X. Jim Gettys had been pushing strongly for an open development model since at least 2000. Gettys, Packard and several others began discussing in detail the requirements for the effective governance of X with open development. 
Finally, in an echo of the X11R6.4 licensing dispute, XFree86 released version 4.4 in February 2004 under a more restrictive license which many projects relying on X found unacceptable. The added clause to the license was based on the original BSD license's advertising clause, which was viewed by the Free Software Foundation and Debian as incompatible with the GNU General Public License. Other groups saw it as against the spirit of the original X. Theo de Raadt of OpenBSD, for instance, threatened to fork XFree86 citing license concerns. The license issue, combined with the difficulties in getting changes in, left many feeling the time was ripe for a fork. The X.Org Foundation In early 2004, various people from X.Org and freedesktop.org formed the X.Org Foundation, and the Open Group gave it control of the x.org domain name. This marked a radical change in the governance of X. Whereas the stewards of X since 1988 (including the prior X.Org) had been vendor organizations, the Foundation was led by software developers and used community development based on the bazaar model, which relies on outside involvement. Membership was opened to individuals, with corporate membership being in the form of sponsorship. Several major corporations such as Hewlett-Packard currently support the X.Org Foundation. The Foundation takes an oversight role over X development: technical decisions are made on their merits by achieving rough consensus among community members. Technical decisions are not made by the board of directors; in this sense, it is strongly modelled on the technically non-interventionist GNOME Foundation. The Foundation employs no developers. The Foundation released X11R6.7, the X.Org Server, in April 2004, based on XFree86 4.4RC2 with X11R6.6 changes merged. Gettys and Packard had taken the last version of XFree86 under the old license and, by making a point of an open development model and retaining GPL compatibility, brought many of the old XFree86 developers on board. While X11 had received extensions such as OpenGL support during the 1990s, its architecture had remained fundamentally unchanged during the decade. In the early part of the 2000s, however, it was overhauled to resolve a number of problems that had surfaced over the years, including a "flawed" font architecture, a 2D graphics system "which had always been intended to be augmented and/or replaced", and latency issues. X11R6.8 came out in September 2004. It added significant new features, including preliminary support for translucent windows and other sophisticated visual effects, screen magnifiers and thumbnailers, and facilities to integrate with 3D immersive display systems such as Sun's Project Looking Glass and the Croquet project. External applications called compositing window managers provide policy for the visual appearance. On 21 December 2005, X.Org released X11R6.9, the monolithic source tree for legacy users, and X11R7.0, the same source code separated into independent modules, each maintainable in separate projects. The Foundation released X11R7.1 on 22 May 2006, about four months after 7.0, with considerable feature improvements. XFree86 development continued for a few more years, 4.8.0 being released on 15 December 2008. Nomenclature The proper names for the system are listed in the manual page as X; X Window System; X Version 11; X Window System, Version 11; or X11. 
The term "X-Windows" (in the manner of the subsequently released "Microsoft Windows") is not officially endorsedwith X Consortium release manager Matt Landau stating in 1993, "There is no such thing as 'X Windows' or 'X Window', despite the repeated misuse of the forms by the trade rags"though it has been in common informal use since early in the history of X and has been used deliberately for provocative effect, for example in the Unix-Haters Handbook. Key terms The X Window System has nuanced usage of a number of terms when compared to common usage, particularly "display" and "screen", a subset of which is given here for convenience: device A graphics device such as a computer graphics card or a computer motherboard's integrated graphics chipset. monitor A physical device such as a CRT or a flat screen computer display. screen An area into which graphics may be rendered, either through software alone into system memory as with VNC, or within a graphics device, some of which can render into more than one screen simultaneously, either viewable simultaneously or interchangeably. Interchangeable screens are often set up to be notionally left and right from one another, flipping from one to the next as the mouse pointer reaches the edge of the monitor. virtual screen Two different meanings are associated with this term: A technique allowing panning a monitor around a screen running at a larger resolution than the monitor is currently displaying. An effect simulated by a window manager by maintaining window position information in a larger coordinate system than the screen and allowing panning by simply moving the windows in response to the user. display A collection of screens, often involving multiple monitors, generally configured to allow the mouse to move the pointer to any position within them. Linux-based workstations are usually capable of having multiple displays, among which the user can switch with a special keyboard combination such as control-alt-function-key, simultaneously flipping all the monitors from showing the screens of one display to the screens in another. The term "display" should not be confused with the more specialized jargon "Zaphod display". The latter is a rare configuration allowing multiple users of a single computer to each have an independent set of display, mouse, and keyboard, as though they were using separate computers, but at a lower per-seat cost. Release history On the prospect of future versions, the X.org website states:
Technology
User interface
null
34151
https://en.wikipedia.org/wiki/X-ray%20crystallography
X-ray crystallography
X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds, crystallographic disorder, and other information. X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys. The method has also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of materials and for differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases. Modern work involves a number of steps, all of which are important. The preliminary steps include preparing good-quality samples, carefully recording the diffracted intensities, and processing the data to remove artifacts. A variety of methods, generically called direct methods, are then used to obtain an initial estimate of the atomic structure. With an initial estimate in hand, further computational techniques, such as those involving difference maps, are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases, new structures are deposited in databases available to the international community. History Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal (law of constancy of interfacial angles). René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size (law of decrements). Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices, which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). 
Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive. Wilhelm Röntgen discovered X-rays in 1895. Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the X-ray notation for sharp spectral lines, noting in 1909 two separate energies, which he at first named "A" and "B"; then, supposing that there might be lines prior to "A", he started an alphabetical numbering beginning with "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays proved to have not only wave but also particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombarded a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular, and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation. The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914. After von Laue's pioneering research, the field developed rapidly, most notably through the work of the physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. 
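As an editorial illustration rather than part of the historical record, Bragg's law, n λ = 2 d sin θ, can be applied numerically. The short Python sketch below assumes a copper K-alpha laboratory source (wavelength 1.5406 Å); the measured angle is an invented placeholder chosen for the example:

    import math

    # Bragg's law: n * wavelength = 2 * d * sin(theta).
    # Given a measured diffraction angle, solve for the interplanar spacing d.
    wavelength = 1.5406   # Cu K-alpha wavelength in angstroms (a common lab source)
    two_theta = 31.7      # example measured angle 2-theta, in degrees (invented)
    n = 1                 # first-order reflection

    theta = math.radians(two_theta / 2.0)
    d = n * wavelength / (2.0 * math.sin(theta))
    print(f"interplanar spacing d = {d:.3f} angstroms")

With these inputs the sketch prints d ≈ 2.82 Å, which matches the (200) plane spacing of table salt, the first structure solved (discussed below).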
The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms. The earliest of these structures were simple inorganic crystals and minerals, but even they revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined), in 1914, was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was about 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite (Mn(OH)2) and, by extension, brucite (Mg(OH)2) in 1919. Also in 1919, the structures of sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure was determined in 1920. The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium. Contributions in different areas Chemistry X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement. The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was rapidly followed by several studies of different long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll. In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. 
These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide. The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry. Materials science and mineralogy The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry. The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography. On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. Biological macromolecular crystallography X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years. Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 130,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. 
The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved fewer than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics is used to prevent radiation damage in protein crystals. Methods Overview Two limiting cases of X-ray crystallography are often distinguished: "small-molecule" crystallography (which includes continuous inorganic solids) and "macromolecular" crystallography. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology. The technique of single-crystal X-ray crystallography has three basic steps. The first, and often most difficult, step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning. In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections. In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement, now called a crystal structure, is usually stored in a public database. Crystallization Although crystallography can be used to characterize the disorder in an impure or irregular crystal, it generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. 
Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure. Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded. Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor nucleation but favor growth, so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, yielding different rates of concentration for a given precipitant/protein mixture. It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one; the sketch following this paragraph gives a sense of how quickly such a grid of conditions grows. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. 
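To give a sense of the combinatorics of such screening, the following toy Python enumeration builds a tiny grid of conditions; the particular pH values, precipitating agents, and concentrations are arbitrary placeholders, not a recommended screen:

    from itertools import product

    # Toy crystallization screen: every combination of a few pH values,
    # precipitating agents, and concentrations becomes one trial condition.
    ph_values = [4.5, 5.5, 6.5, 7.5, 8.5]                    # arbitrary example range
    precipitants = ["ammonium sulfate", "PEG 4000", "NaCl"]  # common classes of agents
    concentrations = ["10%", "20%", "30%"]                   # weight/volume placeholders

    screen = list(product(ph_values, precipitants, concentrations))
    print(f"{len(screen)} conditions in even this tiny grid")  # 5 * 3 * 3 = 45
    for ph, agent, conc in screen[:3]:                         # show the first few trials
        print(f"pH {ph}, {conc} {agent}")

Real screens vary many more parameters (buffer, temperature, additives), which is why hundreds or thousands of conditions are commonly tried.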
Because crystallization-grade protein is difficult to obtain in such large (milligram) quantities, robots have been developed that can accurately dispense crystallization trial drops on the order of 100 nanoliters in volume. This means that roughly 10-fold less protein is used per experiment compared with crystallization trials set up by hand (on the order of 1 microliter). Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations, although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior. Data collection Mounting the crystal The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error. The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to an accuracy of ~25 micrometers, aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer. Recording the reflections The relative intensities of the reflections provide the information needed to determine the arrangement of molecules within the crystal in atomic detail. 
The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point. One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three-dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space. Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at a minimum of three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken. Crystal symmetry, unit cell, and image scaling The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 of the 230 possible space groups are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e. what part of a given reflection was recorded on that image). A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be merged and scaled: peaks that appear in two or more images are identified (merging), and the images are scaled so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. 
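A minimal sketch of the merging step just described, assuming the reflections have already been indexed and grouped by symmetry-equivalent Miller index (all intensities below are invented): each group is averaged, and the spread around the averages yields the kind of internal-consistency residual (R_merge) discussed next:

    # Toy merging of symmetry-equivalent reflections: redundant measurements
    # of each Miller index are averaged; their spread yields R_merge.
    observations = {                      # (h, k, l) -> repeated intensity readings
        (1, 0, 0): [1052.0, 1047.0, 1060.0],
        (1, 1, 0): [301.0, 295.0],
        (2, 1, 0): [88.0, 91.0, 86.0, 90.0],
    }

    merged = {}
    num = den = 0.0
    for hkl, intensities in observations.items():
        mean = sum(intensities) / len(intensities)
        merged[hkl] = mean                # the merged (averaged) intensity
        num += sum(abs(i - mean) for i in intensities)
        den += sum(intensities)

    print(f"R_merge = {num / den:.3f}")   # small values indicate consistent data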
The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data. Initial phasing The intensity of each diffraction "spot" is proportional to the modulus squared of the structure factor. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem (a toy numerical illustration follows the list of approaches below). Initial phase estimates can be obtained in a variety of ways:
Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information by exploiting known phase relationships between certain groups of reflections.
Molecular replacement – If a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
Anomalous X-ray scattering (MAD or SAD phasing) – The X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine.
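The phase problem mentioned above can be made concrete with a toy calculation. For a model given as fractional atomic coordinates, the structure factor is the complex sum F(hkl) = Σ f_j exp(2πi(hx_j + ky_j + lz_j)), but the experiment records only |F|². The Python sketch below uses two invented atoms with crude constant scattering factors; it illustrates the definition, not any real crystal:

    import cmath

    # Structure factor F(hkl) = sum_j f_j * exp(2*pi*i*(h*x_j + k*y_j + l*z_j)).
    # The experiment measures only |F|^2; the phase of F is lost.
    atoms = [                 # invented fractional coordinates (x, y, z) and factor f
        (0.00, 0.00, 0.00, 8.0),
        (0.25, 0.25, 0.25, 6.0),
    ]

    def structure_factor(h, k, l):
        return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
                   for x, y, z, f in atoms)

    F = structure_factor(1, 1, 1)
    print(f"|F|^2 (what is measured) = {abs(F) ** 2:.1f}")      # prints 100.0
    print(f"phase (what is lost)     = {cmath.phase(F):.3f} rad")

Recovering the lost phases is exactly what the approaches listed above are designed to do.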
Model building and phase refinement Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as R = Σ ||F_obs| - |F_calc|| / Σ |F_obs|, where F is the structure factor and the sums run over the measured reflections. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimates come from the model, each round of calculated maps tends to show density wherever the model has density, regardless of whether there truly is density there. This problem can be mitigated by maximum-likelihood weighting and checking using omit maps. It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization. Disorder A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered. Applied computational data analysis The use of computational methods for powder X-ray diffraction data analysis is now widespread. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method, some of them being open and free software, such as FullProf Suite, Jana2006, MAUD, Rietan, GSAS, etc., while others are available under commercial licenses, such as Diffrac.Suite TOPAS, Match!, etc. 
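The core of such least-squares refinement can be sketched in a few lines: simulate a pattern from model parameters and adjust the parameters to minimize the squared difference from the observed pattern. The Python toy below refines a single Gaussian peak position by a brute-force scan; it is a bare-bones illustration of the idea, not the Rietveld method as implemented in the programs named above, and every number is invented:

    import math

    # One Gaussian "Bragg peak" stands in for a whole simulated diffractogram.
    def peak(two_theta, center, height=100.0, width=0.2):
        return height * math.exp(-((two_theta - center) / width) ** 2)

    grid = [20.0 + 0.01 * i for i in range(2001)]              # 2-theta, 20-40 degrees
    observed = [peak(t, center=31.70) for t in grid]           # mock "experimental" data

    def sum_sq_residual(center):
        return sum((o - peak(t, center)) ** 2 for t, o in zip(grid, observed))

    candidates = [31.0 + 0.005 * i for i in range(281)]        # trial peak centers
    best = min(candidates, key=sum_sq_residual)                # least-squares winner
    print(f"refined peak center = {best:.3f} degrees 2-theta") # recovers 31.700

Real programs refine dozens of structural and instrumental parameters simultaneously, with proper minimization algorithms rather than a grid scan.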
Most of these Rietveld tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking the crystallographic structure itself into account. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinning, intergrowths). Deposition of the structure Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for proteins and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases. Contribution of women to X-ray crystallography A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science. Kathleen Lonsdale was a research student of William Henry Bragg, who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of Crystallography at University College London. Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men.... Is it utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?" During this early period, Lonsdale began a collaboration with William T. Astbury on a set of 230 space-group tables, which was published in 1924 and became an essential tool for crystallographers. In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography. She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942, during the war, and on vitamin B12 in 1948. While her group slowly grew, its predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject. Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. 
Watson revealed in his autobiographical account of the discovery of the structure of DNA, The Double Helix, that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses. Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have showered Isabella with awards in her own right. Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography, which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge founded and ran the Cambridge Crystallographic Data Centre, an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker, a British scientist, co-authored Crystal Structure Analysis: A Primer, first published in 1971 and, as of 2010, in its third edition. Eleanor Dodson, an Australian-born biologist who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4, the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide. Nobel Prizes involving X-ray crystallography
Physical sciences
Crystallography
Physics
34197
https://en.wikipedia.org/wiki/X-ray
X-ray
An X-ray (also known in many languages as Röntgen radiation) is a form of high-energy electromagnetic radiation with a wavelength shorter than those of ultraviolet rays and longer than those of gamma rays. Roughly, X-rays have a wavelength ranging from 10 nanometers to 10 picometers, corresponding to frequencies in the range of 30 petahertz to 30 exahertz and photon energies in the range of 100 eV to 100 keV, respectively. X-rays were discovered in 1895 by the German scientist Wilhelm Conrad Röntgen, who named it X-radiation to signify an unknown type of radiation. X-rays can penetrate many solid substances such as construction materials and living tissue, so X-ray radiography is widely used in medical diagnostics (e.g., checking for broken bones) and materials science (e.g., identifying some chemical elements and detecting weak points in construction materials). However, X-rays are ionizing radiation, and exposure can be hazardous to health, causing DNA damage, cancer and, at higher intensities, burns and radiation sickness. Their generation and use are strictly controlled by public health authorities. History Pre-Röntgen observations and research X-rays first came to scientific attention as a type of unidentified radiation emanating from discharge tubes, noticed by experimenters investigating the cathode rays (energetic electron beams, first observed in 1869) that such tubes produce. Early researchers noticed effects attributable to X-rays in many of the early Crookes tubes (invented around 1875). Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube. The earliest experimenter thought to have (unknowingly) produced X-rays was William Morgan. In 1785, he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays. This work was further explored by Humphry Davy and his assistant Michael Faraday. Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a "window" at the end made of thin aluminium, facing the cathode so the cathode rays would strike it (later called a "Lenard tube"). He found that something came through that would expose photographic plates and cause fluorescence. He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these "Lenard rays" were actually X-rays. Helmholtz formulated mathematical equations for X-rays, postulating a dispersion theory based on the electromagnetic theory of light before Röntgen made his discovery and announcement; however, he did not work with actual X-rays. In early 1890, the photographer William Jennings and Arthur W. Goodspeed, an associate professor at the University of Pennsylvania, were making photographs of coins with electric sparks. On 22 February, after the end of their experiments, two coins were left on a stack of photographic plates before Goodspeed demonstrated the operation of Crookes tubes to Jennings. While developing the plates, Jennings noticed disks of unknown origin on some of them, but nobody could explain them, and they moved on. 
Only in 1896 did they realize that they had accidentally made an X-ray photograph (they did not claim a discovery). Also in 1890, Röntgen's assistant Ludwig Zehnder noticed a flash of light from a fluorescent screen immediately before the covered tube he was switching on punctured. When Stanford University physics professor Fernando Sanford conducted his "electric photography" experiments in 1891–1893, photographing coins in the light of electric sparks as Jennings and Goodspeed had, he may have unknowingly generated and detected X-rays. His letter of 6 January 1893 to the Physical Review was duly published, and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner. In 1894, Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this invisible, radiant energy. After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design, as well as Crookes tubes. Discovery by Röntgen On 8 November 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary communication" and on 28 December 1895 submitted it to Würzburg's Physical-Medical Society journal. This was the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. Some early texts refer to them as Chi-rays, having interpreted "X" as the uppercase Greek letter Chi, Χ. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays from a Crookes tube, which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen and realized that some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper. Röntgen discovered their medical use when he made an image of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first photograph of a human body part made using X-rays. When she saw the picture, she said "I have seen my death." The discovery of X-rays generated significant interest. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published. This was probably a conservative estimate, considering that nearly every paper around the world extensively reported on the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone. Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy. The name X-rays stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays. 
They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Czech, Bulgarian, Swedish, Finnish, Portuguese, Estonian, Slovak, Slovenian, Turkish, Russian, Latvian, Lithuanian, Albanian, Japanese, Dutch, Georgian, Hebrew, Icelandic, and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery. Advances in radiology Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission, he sent a letter to physicians he knew around Europe (1 January 1896). News (and the creation of "shadowgrams") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray photograph (of a hand). Through February, there were 46 experimenters taking up the technique in North America alone. The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards was also the first to use X-rays in a surgical operation. In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays "not only photograph, but also affect the living function". At around the same time, the zoological illustrator James Green began to use X-rays to examine fragile specimens. George Albert Boulenger first mentioned this work in a paper he delivered before the Zoological Society of London in May 1896. The book Sciagraphs of British Batrachians and Reptiles (sciagraph is an obsolete name for an X-ray photograph), by Green and James H. Gardiner, with a foreword by Boulenger, was published in 1897. The first medical X-ray made in the United States was obtained using a discharge tube of Ivan Puluj's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Puluj tube produced X-rays. This was a result of Puluj's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896, Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images "live" using some form of luminescent screen. Röntgen used a screen coated with barium platinocyanide. On 5 February 1896, live imaging devices were developed by both Italian scientist Enrico Salvioni (his "cryptoscope") and William Francis Magie of Princeton University (his "Skiascope"), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896, he developed the first mass-produced live imaging device, his "Vitascope", later called the fluoroscope, which became the standard for medical X-ray examinations. 
Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure. During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time needed to create an X-ray for medical imaging from an hour to a few minutes. In 1901, U.S. President William McKinley was shot twice in an assassination attempt while attending the Pan American Exposition in Buffalo, New York. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later. Hazards discovered With the widespread experimentation with X-rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and William Lofland Dudley of Vanderbilt University reported hair loss after Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet, an experiment was attempted, for which Dudley "with his characteristic devotion to science" volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half-inch from the hair." Beyond burns, hair loss, and cancer, X-rays have also been linked to infertility in males, depending on the radiation dose involved. In August 1896, H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an X-ray demonstration. The case was reported in Electrical Review and led to many other reports of problems associated with X-rays being sent in to the publication. Many experimenters, including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects were sometimes blamed for the damage, including ultraviolet rays and (according to Tesla) ozone. Many physicians claimed there were no effects from X-ray exposure at all. On 3 August 1905, in San Francisco, California, Elizabeth Fleischman, an American X-ray pioneer, died from complications resulting from her work with X-rays. Hall-Edwards developed a cancer (then called X-ray dermatitis) sufficiently advanced by 1904 to cause him to write papers and give public addresses on the dangers of X-rays. His left arm had to be amputated at the elbow in 1908, and four fingers on his right arm soon thereafter, leaving only a thumb. He died of cancer in 1926. 
His left hand is kept at Birmingham University. 20th century and beyond The many applications of X-rays immediately generated enormous interest. Workshops began making specialized versions of Crookes tubes for generating X-rays, and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920. A typical early 20th-century medical X-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes. The spark gap allowed detecting the polarity of the sparks, measuring the voltage by the length of the sparks (thus determining the "hardness" of the vacuum of the tube), and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which the spark gap began to spark at a relatively short gap was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A somewhat longer spark indicated the tube was suitable for shoulders and knees, and a still longer spark indicated a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, it had to be opened until the sparking ceased before the tube could be operated for imaging. Exposure times for photographic plates ranged from around half a minute for a hand to a couple of minutes for a thorax. The plates might have a small addition of fluorescent salt to reduce exposure times. Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air), as a current will not flow in such a tube if it is fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a small side tube that contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control. In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode tubes by about 1920. In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery. In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William Lawrence Bragg, gave birth to the field of X-ray crystallography. In 1913, Henry Moseley performed crystallography experiments with X-rays emanating from various metals and formulated Moseley's law, which relates the frequency of the X-rays to the atomic number of the metal. The Coolidge X-ray tube was invented the same year by William D. 
Coolidge. It made possible the continuous emission of X-rays. Modern X-ray tubes are based on this design, often employing rotating targets, which allow significantly higher heat dissipation than static targets and thus higher X-ray output for use in high-powered applications such as rotational CT scanners. The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by Major John Hall-Edwards in Birmingham, England. In 1908, he had to have his left arm amputated because of the spread of X-ray dermatitis. Medical science also used the motion picture to study human physiology. In 1913, a motion picture was made in Detroit showing a hard-boiled egg inside a human stomach. This early X-ray movie was recorded at a rate of one still image every four seconds. Dr Lewis Gregory Cole of New York was a pioneer of the technique, which he called "serial radiography". In 1918, X-rays were used in association with motion picture cameras to capture the human skeleton in motion. In 1920, the technique was used to record the movements of the tongue and teeth in the study of languages by the Institute of Phonetics in England. In 1914, Marie Curie developed radiological cars to support soldiers injured in World War I. The cars allowed rapid X-ray imaging of wounded soldiers so battlefield surgeons could operate more quickly and accurately. From the early 1920s through to the 1950s, X-ray machines were developed to assist in the fitting of shoes and were sold to commercial shoe stores. Concerns regarding the impact of frequent or poorly controlled use were expressed in the 1950s, leading to the practice's eventual end that decade. The X-ray microscope was developed during the 1950s. The Chandra X-ray Observatory, launched on 23 July 1999, has been allowing the exploration of the very violent processes in the universe that produce X-rays. Unlike visible light, which gives a relatively stable view of the universe, the X-ray universe is unstable. It features stars being torn apart by black holes, galactic collisions, and novae, as well as neutron stars that build up layers of plasma that then explode into space. An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the 1980s, but the only test of the device (a sort of laser "blaster", or death ray, powered by a thermonuclear explosion) gave inconclusive results. For technical and political reasons, the overall project (including the X-ray laser) was defunded, though it was later revived by the second Bush Administration as National Missile Defense using different technologies. Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of an X-ray beam to form the image. Due to its good sensitivity to density differences, it is especially useful for imaging soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all using different principles to convert phase variations in the X-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, refraction-enhanced imaging, and X-ray interferometry. These methods provide higher contrast compared to normal absorption-based X-ray imaging, making it possible to distinguish details of almost similar density from each other. 
A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high-resolution X-ray detectors. Energy ranges Soft and hard X-rays X-rays with high photon energies, above 5–10 keV (below 0.2–0.1 nm wavelength), are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays. The intermediate range, with photon energies of several keV, is often referred to as tender X-rays. Due to their penetrating ability, hard X-rays are widely used to image the inside of objects (e.g. in medical radiography and airport security). The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer. Gamma rays There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus. This definition has several problems: other processes can also generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m (0.1 Å), defined as gamma radiation. This criterion assigns a photon to an unambiguous category, but is only possible if its wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide, since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei. Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source. Thus, gamma rays generated for medical and industrial uses, for example radiotherapy, in the range of 6–20 MeV, can in this context also be referred to as X-rays. Properties X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes X-radiation a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes burns and radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging, this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be used in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy. Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. 
small animal CT). The penetration depth varies by several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image. X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high-resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals. Interaction with matter X-rays interact with matter in three main ways: through photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates. Photoelectric absorption The probability of a photoelectric absorption per unit mass is approximately proportional to Z³/E³, where Z is the atomic number and E is the energy of the incident photon. This rule is not valid close to inner shell electron binding energies, where there are abrupt changes in interaction probability, so-called absorption edges. However, the general trend of high absorption coefficients, and thus short penetration depths, for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy, where Compton scattering takes over. For higher atomic number substances, this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs. A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy. Compton scattering Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging. Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probability for different scattering angles is described by the Klein–Nishina formula. The transferred energy can be obtained directly from the scattering angle by conservation of energy and momentum. Rayleigh scattering Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime. Coherent (elastic) forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1. 
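As a concrete illustration of the Compton kinematics just described, the scattered photon energy can be computed from the scattering angle alone. The following is a minimal Python sketch, assuming only the standard Compton formula and the electron rest energy of 511 keV; the function name is illustrative, not from any particular library:

import math

ELECTRON_REST_ENERGY_KEV = 511.0  # electron rest energy, m_e c^2, in keV

def compton_scattered_energy_kev(incident_kev, angle_deg):
    # Compton formula: E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))
    cos_theta = math.cos(math.radians(angle_deg))
    return incident_kev / (1.0 + (incident_kev / ELECTRON_REST_ENERGY_KEV) * (1.0 - cos_theta))

e_in = 60.0  # a typical diagnostic X-ray photon energy, in keV
e_out = compton_scattered_energy_kev(e_in, 90.0)
print(f"scattered photon: {e_out:.1f} keV, transferred to the electron: {e_in - e_out:.1f} keV")

For a 60 keV photon scattered through 90 degrees this yields about 53.7 keV, so roughly 6.3 keV is transferred to the recoil electron.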
Production Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced. Production by electrons X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate electrons released by a hot cathode to a high velocity. The high-velocity electrons collide with a metal target, the anode, creating the X-rays. In medical X-ray tubes the target is usually tungsten, or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed, as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem. When even lower energies are needed, as in X-ray photoelectron spectroscopy, the Kα X-rays from an aluminium or magnesium target are often used. The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes: Characteristic X-ray emission (X-ray electroluminescence): If the electron has enough energy, it can knock an orbital electron out of the inner electron shell of the target atom. After that, electrons from higher energy levels fill the vacancies, and X-ray photons are emitted. This process produces an emission spectrum of X-rays at a few discrete frequencies, sometimes referred to as spectral lines. Usually, these are transitions from the upper shells to the K shell (called K lines), to the L shell (called L lines), and so on. If the transition is from 2p to 1s, it is called Kα, while if it is from 3p to 1s it is called Kβ. The frequencies of these lines depend on the material of the target and are therefore called characteristic lines. The Kα line usually has greater intensity than the Kβ line and is more desirable in diffraction experiments, so the Kβ line is usually filtered out with a filter made of a metal having one proton less than the anode material (e.g. a Ni filter for a Cu anode or a Nb filter for a Mo anode). Bremsstrahlung: This is radiation given off by the electrons as they are scattered by the strong electric field near the nuclei. These X-rays have a continuous spectrum. The frequency of Bremsstrahlung is limited by the energy of the incident electrons. So, the resulting output of a tube consists of a continuous Bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV, and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV. Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays; thus most of the electric power consumed by the tube is released as waste heat. When producing a usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat. 
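The cap on photon energy described above also fixes the shortest wavelength a tube can emit (the Duane–Hunt limit). A small Python sketch, assuming only the constant hc ≈ 1239.84 eV·nm; the helper name is illustrative:

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def duane_hunt_min_wavelength_nm(tube_voltage_kv):
    # An electron accelerated through V kilovolts carries V keV, which caps
    # the bremsstrahlung photon energy, so lambda_min = h*c / (e*V).
    return HC_EV_NM / (tube_voltage_kv * 1000.0)

for kv in (20, 80, 150):  # roughly the diagnostic tube-voltage range
    print(f"{kv} kV tube: photons up to {kv} keV, lambda_min = {duane_hunt_min_wavelength_nm(kv):.4f} nm")

An 80 kV tube, for example, emits nothing shorter than about 0.0155 nm, consistent with the 80 keV ceiling noted above.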
A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization. Short, nanosecond bursts of X-rays peaking at 15 keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging. Production by fast positive ions X-rays can also be produced by fast protons or other positive ions. Proton-induced X-ray emission (particle-induced X-ray emission) is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ to that of the target atom. Production in lightning and laboratory discharges X-rays are also produced in lightning, accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning-related electric fields and the subsequent production of photons through Bremsstrahlung. This produces photons with energies ranging from a few keV to several tens of MeV. In laboratory discharges with a gap size of approximately 1 meter and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed. A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons; however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons. Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges. Detectors X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used for radiography were originally based on photographic plates and later photographic film, but are now mostly replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection, direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy-dispersive or wavelength-dispersive spectrometers. For X-ray diffraction applications, such as X-ray crystallography, hybrid photon counting detectors are widely used. Medical uses Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging. The first medical use was less than a month after his paper on the subject. Up to 2010, five billion medical imaging examinations had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Projectional radiographs Projectional radiography is the practice of producing two-dimensional images using X-ray radiation. Bones contain a high concentration of calcium, which, due to its relatively high atomic number, absorbs X-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see. 
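This contrast formation follows the exponential attenuation law (Beer–Lambert): I = I₀·e^(−μx), where μ is the material's linear attenuation coefficient. A minimal Python sketch; the coefficients below are illustrative order-of-magnitude values at a diagnostic energy, not measured data:

import math

def transmitted_fraction(mu_per_cm, thickness_cm):
    # Beer-Lambert law: fraction of a narrow X-ray beam that passes through
    # a thickness of material with linear attenuation coefficient mu (1/cm).
    return math.exp(-mu_per_cm * thickness_cm)

# Illustrative coefficients only; real values depend strongly on photon
# energy and on the exact composition of the tissue.
for name, mu in (("soft tissue", 0.2), ("bone", 0.5)):
    print(f"3 cm of {name}: {transmitted_fraction(mu, 3.0):.0%} of the beam transmitted")

With these assumed numbers, roughly 55% of the beam survives 3 cm of soft tissue but only about 22% survives the same thickness of bone, which is why bone casts a visible shadow on the detector.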
Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal X-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations), and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones (which are often, but not always, visible). Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called "2D to 3D registration" is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs. Dental radiography is commonly used in the diagnosis of common oral problems, such as cavities. In medical diagnostic applications, the low-energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low-energy part of the spectrum. This is called hardening the beam, since it shifts the center of the spectrum towards higher-energy (or harder) X-rays. To generate an image of the cardiovascular system, including the arteries and veins (angiography), an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel. Computed tomography Computed tomography (CT scanning) is a medical imaging modality in which tomographic images, or slices, of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions. These cross-sectional images can be combined into a three-dimensional image of the inside of the body. CT scanning is a quick and relatively cost-effective imaging modality that can be used for diagnostic and therapeutic purposes in various medical disciplines. Fluoroscopy Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera, allowing the images to be recorded and played on a monitor. This method may use a contrast material. 
Examples include cardiac catheterization (to examine for coronary artery blockages), embolization procedures (to stop bleeding during hemorrhoidal artery embolization), and the barium swallow (to examine for esophageal and swallowing disorders). In recent years, modern fluoroscopy has used short bursts of X-rays, rather than a continuous beam, to lower radiation exposure for both the patient and the operator. Radiotherapy The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body, such as those of the brain, lung, prostate, and breast. Adverse effects X-rays are a form of ionizing radiation and are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government. Diagnostic X-rays (primarily from CT scans, due to the large dose used) increase the risk of developmental problems and cancer in those exposed. It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past, and that this may increase to as high as 1.5–2% with 2007 rates of CT usage. Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer. However, this is under increasing doubt. Cancer risk may begin at exposures as low as 1100 mGy. It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%. The amount of absorbed radiation depends upon the type of X-ray test and the body part involved. CT and fluoroscopy entail higher doses of radiation than do plain X-rays. To place the increased risk in perspective, a plain chest X-ray exposes a person to the same amount of radiation as people receive from background sources (depending upon location) over 10 days, while exposure from a dental X-ray is approximately equivalent to 1 day of environmental background radiation. Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be the equivalent of 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk by between 1 in 1,000 and 1 in 10,000. This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime. For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy. A head CT scan (1.5 mSv, 64 mGy) that is performed once with and once without contrast agent would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with an estimation uncertainty range of about ±19% to ±32% for adult head scans, depending upon the method used. The risk of radiation is greater to a fetus, so in pregnant patients the benefits of the investigation (X-ray) should be balanced against the potential hazards to the fetus. Even a single scan during the nine months of pregnancy can be harmful to the fetus. Therefore, pregnant women are given ultrasounds as their diagnostic imaging, because ultrasound does not use radiation. 
If there is too much radiation exposure, there could be harmful effects on the fetus or the reproductive organs of the mother. In the US, an estimated 62 million CT scans are performed annually, including more than 4 million on children. Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk. Medical X-rays are a significant source of human-made radiation exposure. In 1987, they accounted for 58% of exposure from human-made sources in the United States. Since human-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine. Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital): a single dental X-ray results in an exposure of 5 to 40 μSv, and a full-mouth series of X-rays may result in an exposure of up to 60 (digital) to 180 (film) μSv, for a yearly average of up to 400 μSv. Financial incentives have been shown to have a significant impact on X-ray use, with doctors who are paid a separate fee for each X-ray providing more X-rays. Early photon tomography, or EPT (as of 2015), along with other techniques, is being researched as a potential alternative to X-rays for imaging applications. Other uses Other notable uses of X-rays include: X-ray crystallography, in which the pattern produced by the diffraction of X-rays through the closely spaced lattice of atoms in a crystal is recorded and then analysed to reveal the nature of that lattice. A related technique, fiber diffraction, was used by Rosalind Franklin to discover the double helical structure of DNA. X-ray astronomy, an observational branch of astronomy that deals with the study of X-ray emission from celestial objects. X-ray microscopic analysis, which uses electromagnetic radiation in the soft X-ray band to produce images of very small objects. X-ray fluorescence, a technique in which X-rays are generated within a specimen and detected; the outgoing energy of the X-rays can be used to identify the composition of the sample. Industrial radiography, which uses X-rays for inspection of industrial parts, particularly welds. Radiography of cultural objects, most often X-rays of paintings to reveal underdrawing, pentimenti (alterations in the course of painting or by later restorers), and sometimes previous paintings on the support; many pigments, such as lead white, show well in radiographs. X-ray spectromicroscopy, which has been used to analyse the reactions of pigments in paintings, for example in analysing colour degradation in the paintings of van Gogh. Authentication and quality control of packaged items. Industrial CT (computed tomography), a process that uses X-ray equipment to produce three-dimensional representations of components both externally and internally. 
This is accomplished through computer processing of projection images of the scanned object taken in many directions. Airport security luggage scanners use X-rays for inspecting the interior of luggage for security threats before loading on aircraft. Truck scanners and domestic police departments use X-rays for inspecting the interior of trucks. X-ray art and fine art photography, the artistic use of X-rays, for example in the works by Stane Jagodič. X-ray hair removal, a method popular in the 1920s but now banned by the FDA. Shoe-fitting fluoroscopes, popularized in the 1920s, banned in the US in the 1960s, in the UK in the 1970s, and later in continental Europe. Roentgen stereophotogrammetry, used to track movement of bones based on the implantation of markers. X-ray photoelectron spectroscopy, a chemical analysis technique relying on the photoelectric effect, usually employed in surface science. Radiation implosion, the use of high-energy X-rays generated from a fission explosion (an A-bomb) to compress nuclear fuel to the point of fusion ignition (an H-bomb). Visibility While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported that, after dark adaptation and placing his eye close to an X-ray tube, he saw a faint "blue-gray" glow which seemed to originate within the eye itself. Upon hearing this, Röntgen reviewed his record books and found that he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door, Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible, and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light. Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the European Synchrotron Radiation Facility is one example of such high intensity. Units of measure and exposure The measure of X-rays' ionizing ability is called the exposure: The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure; it is the amount of radiation required to create one coulomb of charge of each polarity in one kilogram of matter. The roentgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation required to create one electrostatic unit of charge of each polarity in one cubic centimeter of dry air. 1 roentgen = 2.58×10⁻⁴ C/kg. 
However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited in it than to the charge generated. This measure of energy absorbed is called the absorbed dose: The gray (Gy), which has units of joules per kilogram (J/kg), is the SI unit of absorbed dose; it is the amount of radiation required to deposit one joule of energy in one kilogram of any kind of matter. The rad is the (obsolete) corresponding traditional unit, equal to 10 millijoules of energy deposited per kilogram. 100 rad = 1 gray. The equivalent dose is the measure of the biological effect of radiation on human tissue; for X-rays it is numerically equal to the absorbed dose. The roentgen equivalent man (rem) is the traditional unit of equivalent dose. For X-rays it is equal to the rad, or, in other words, 10 millijoules of energy deposited per kilogram. 100 rem = 1 Sv. The sievert (Sv) is the SI unit of equivalent dose, and also of effective dose. For X-rays the equivalent dose in sieverts is numerically equal to the absorbed dose in grays (1 Sv corresponds to 1 Gy), but the effective dose, which also weights for the varying radiosensitivity of different tissues, is usually not numerically equal to the absorbed dose.
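Because the units above differ only by fixed scale factors, conversions are one-line arithmetic. A minimal Python sketch (the helper names are illustrative; the 64 mGy figure is the head-CT absorbed dose cited earlier):

def rad_to_gray(rad):
    return rad / 100.0  # 100 rad = 1 Gy

def rem_to_sievert(rem):
    return rem / 100.0  # 100 rem = 1 Sv

def roentgen_to_coulomb_per_kg(roentgen):
    return roentgen * 2.58e-4  # 1 R = 2.58e-4 C/kg by definition

# For X-rays the radiation weighting factor is 1, so the equivalent dose in
# sieverts is numerically the absorbed dose in grays:
absorbed_gy = 0.064  # 64 mGy, roughly the absorbed dose of a head CT
equivalent_sv = absorbed_gy * 1.0
print(rad_to_gray(100.0), rem_to_sievert(100.0), equivalent_sv)  # 1.0 1.0 0.064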
Physical sciences
Electrodynamics
null
34198
https://en.wikipedia.org/wiki/X86
X86
x86 (also known as 80x86 or the 8086 family) is a family of complex instruction set computer (CISC) instruction set architectures initially developed by Intel, based on the 8086 microprocessor and its 8-bit-external-bus variant, the 8088. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486. Colloquially, their names were "186", "286", "386" and "486". The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. Embedded systems and general-purpose computers used x86 chips before the PC-compatible market started, some of them before the IBM PC (1981) debut. Today, most desktop and laptop computers sold are based on the x86 architecture family, while mobile categories such as smartphones and tablets are dominated by ARM. At the high end, x86 continues to dominate computation-intensive workstation and cloud computing segments. Overview In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies binary compatibility with the 32-bit instruction set of the 80386. This is because this instruction set has become something of a lowest common denominator for many modern operating systems, and probably also because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the "iAPX" designation of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089, and simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX (for operating systems), iSBC (for single-board computers), and iSBX (for multimodule boards based on the 8086 architecture), all together under the heading Microsystem 80. However, this naming scheme was quite temporary, lasting only a few years during the early 1980s. Although the 8086 was primarily developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used in midrange computers, workstations, servers, and most new supercomputer clusters of the TOP500 list. A large amount of software, including a wide range of operating systems, runs on x86-based hardware. Modern x86 is relatively uncommon in embedded systems, however; small low-power applications (using tiny batteries) and low-cost microprocessor markets, such as home appliances and toys, lack significant x86 presence. Simple 8- and 16-bit based architectures are common here, as well as simpler RISC architectures like RISC-V, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some relatively low-power and low-cost segments. 
There have been several attempts, including by Intel, to end the market dominance of the "inelegant" x86 architecture, descended directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432 (a project originally named the Intel 8800), the Intel i960, the Intel i860 and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures, circuitry and semiconductor manufacturing would make it hard to replace x86 in many segments. AMD's 64-bit extension of x86 (which Intel eventually responded to with a compatible design) and the scalability of x86 chips in the form of modern multi-core CPUs underline x86 as an example of how continuous refinement of established industry standards can resist competition from completely new architectures. Chronology The table below lists processor models and model series implementing various architectures in the x86 family, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs. History Designers and manufacturers At various times, companies such as IBM, VIA, NEC, AMD, TI, STM, Fujitsu, OKI, Siemens, Cyrix, Intersil, C&T, NexGen, UMC, and DM&P started to design or manufacture x86 processors (CPUs) intended for personal computers and embedded systems. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek. Such x86 implementations were seldom simple copies but often employed different internal microarchitectures and different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors, often named similarly to Intel's original chips. After the fully pipelined i486, in 1993 Intel introduced the Pentium brand name (which, unlike numbers, could be trademarked) for their new set of superscalar x86 designs. With the x86 naming scheme now legally cleared, other x86 vendors had to choose different names for their x86-compatible products, and initially some chose to continue with variations of the numbering scheme: IBM partnered with Cyrix to produce the 5x86 and then the very efficient 6x86 (M1) and 6x86MX (MII) lines of Cyrix designs, which were the first x86 microprocessors implementing register renaming to enable speculative execution. AMD meanwhile designed and manufactured the advanced but delayed 5k86 (K5), which, internally, was closely based on AMD's earlier 29K RISC design; similar to NexGen's Nx586, it used a strategy in which dedicated pipeline stages decode x86 instructions into uniform and easily handled micro-operations, a method that has remained the basis for most x86 designs to this day. Some early versions of these microprocessors had heat dissipation problems. The 6x86 was also affected by a few minor compatibility problems, the Nx586 lacked a floating-point unit (FPU) and (the then crucial) pin compatibility, while the K5 had somewhat disappointing performance when it was (eventually) introduced. Customer ignorance of alternatives to the Pentium series further contributed to these designs being comparatively unsuccessful, despite the fact that the K5 had very good Pentium compatibility and the 6x86 was significantly faster than the Pentium on integer code. 
AMD later managed to grow into a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron. There were also other contenders, such as Centaur Technology (formerly IDT), Rise Technology, and Transmeta. VIA Technologies' energy-efficient C3 and C7 processors, which were designed by the Centaur company, were sold for many years following their release in 2005. Centaur's 2008 design, the VIA Nano, was their first processor with superscalar and speculative execution. It was introduced at about the same time (in 2008) as Intel introduced the Intel Atom, its first "in-order" processor after the P5 Pentium. Many additions and extensions have been added to the original x86 instruction set over the years, almost consistently with full backward compatibility. The architecture family has been implemented in processors from Intel, Cyrix, AMD, VIA Technologies and many other companies; there are also open implementations, such as the Zet SoC platform (currently inactive). Nevertheless, of those, only Intel, AMD, VIA Technologies, and DM&P Electronics hold x86 architectural licenses, and of these, only the first two actively produce modern 64-bit designs, leading to what has been called a "duopoly" of Intel and AMD in x86 processors. However, in 2014 the Shanghai-based Chinese company Zhaoxin, a joint venture between a Chinese company and VIA Technologies, began designing VIA-based x86 processors for desktops and laptops. The release of its newest "7" family of x86 processors (e.g. the KX-7000), which are not quite as fast as AMD or Intel chips but are still state of the art, had been planned for 2021; as of March 2022, however, the release had not taken place. From 16-bit and 32-bit to 64-bit architecture The instruction set architecture has twice been extended to a larger word size. In 1985, Intel released the 32-bit 80386 (later known as i386), which gradually replaced the earlier 16-bit chips in computers (although typically not in embedded systems) during the following years; this extended programming model was originally referred to as the i386 architecture (like its first implementation) but Intel later dubbed it IA-32 when introducing its (unrelated) IA-64 architecture. In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents and later as AMD64. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64. Microsoft and Sun Microsystems/Oracle also use the term "x64", while many Linux distributions and the BSDs use the "amd64" term. Microsoft Windows, for example, designates its 32-bit versions as "x86" and 64-bit versions as "x64", while installation files of 64-bit Windows versions are required to be placed into a directory called "AMD64". In 2023, Intel proposed a major change to the architecture, referred to as X86S (formerly known as X86-S). The S in X86S stood for "simplification", and the proposal aimed to remove support for legacy execution modes and instructions. A processor implementing it would start execution directly in long mode and would only support 64-bit operating systems. 32-bit code would only be supported for user applications running in ring 3, and would use the same simplified segmentation as long mode. In December 2024, Intel cancelled this project. Basic properties of the architecture The x86 architecture is a variable-instruction-length, primarily "CISC" design with emphasis on backward compatibility. 
The instruction set is not typical CISC, however, but basically an extended version of the simple eight-bit 8008 and 8080 architectures. Byte-addressing is enabled, and words are stored in memory with little-endian byte order. Memory access to unaligned addresses is allowed for almost all instructions. The largest native size for integer arithmetic and memory addresses (or offsets) is 16, 32 or 64 bits depending on architecture generation (newer processors include direct support for smaller integers as well). Multiple scalar values can be handled simultaneously via the SIMD unit present in later generations, as described below. Immediate addressing offsets and immediate data may be expressed as 8-bit quantities for the frequently occurring cases or contexts where a −128..127 range is enough. Typical instructions are therefore 2 or 3 bytes in length (although some are much longer, and some are single-byte). To further conserve encoding space, most registers are expressed in opcodes using three or four bits, the latter via an opcode prefix in 64-bit mode, while at most one operand to an instruction can be a memory location. However, this memory operand may also be the destination (or a combined source and destination), while the other operand, the source, can be either a register or an immediate. Among other factors, this contributes to a code size that rivals eight-bit machines and enables efficient use of instruction cache memory. The relatively small number of general registers (also inherited from its 8-bit ancestors) has made register-relative addressing (using small immediate offsets) an important method of accessing operands, especially on the stack. Much work has therefore been invested in making such accesses as fast as register accesses, i.e., a one-cycle instruction throughput in most circumstances where the accessed data is available in the top-level cache. Floating point and SIMD A dedicated floating-point processor with 80-bit internal registers, the 8087, was developed for the original 8086. This microprocessor subsequently developed into the extended 80387, and later processors incorporated a backward-compatible version of this functionality on the same microprocessor as the main processor. In addition to this, modern x86 designs also contain a SIMD unit (see SSE below) where instructions can work in parallel on (one or two) 128-bit words, each containing two or four floating-point numbers (each 64 or 32 bits wide respectively), or alternatively 2, 4, 8 or 16 integers (each 64, 32, 16 or 8 bits wide respectively). The presence of wide SIMD registers means that existing x86 processors can load or store up to 128 bits of memory data in a single instruction and also perform bitwise operations (although not integer arithmetic) on full 128-bit quantities in parallel. Intel's Sandy Bridge processors added the Advanced Vector Extensions (AVX) instructions, widening the SIMD registers to 256 bits. The Intel Initial Many Core Instructions implemented by the Knights Corner Xeon Phi processors, and the AVX-512 instructions implemented by the Knights Landing Xeon Phi processors and by Skylake-X processors, use 512-bit wide SIMD registers. Current implementations During execution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces called micro-operations. 
These are then handed to a control unit that buffers and schedules them in compliance with x86 semantics so that they can be executed, partly in parallel, by one of several (more or less specialized) execution units. These modern x86 designs are thus pipelined, superscalar, and also capable of out-of-order and speculative execution (via branch prediction, register renaming, and memory dependence prediction), which means they may execute multiple (partial or complete) x86 instructions simultaneously, and not necessarily in the same order as given in the instruction stream. Some Intel CPUs (Xeon Foster MP, some Pentium 4, and some Nehalem and later Intel Core processors) and AMD CPUs (starting from Zen) are also capable of simultaneous multithreading with two threads per core (Xeon Phi has four threads per core). Some Intel CPUs support transactional memory (TSX). When introduced, in the mid-1990s, this method was sometimes referred to as a "RISC core" or as "RISC translation", partly for marketing reasons, but also because these micro-operations share some properties with certain types of RISC instructions. However, traditional microcode (used since the 1950s) also inherently shares many of the same properties; the new method differs mainly in that the translation to micro-operations now occurs asynchronously. Not having to synchronize the execution units with the decode steps opens up possibilities for more analysis of the (buffered) code stream, and therefore permits detection of operations that can be performed in parallel, simultaneously feeding more than one execution unit. The latest processors also do the opposite when appropriate; they combine certain x86 sequences (such as a compare followed by a conditional jump) into a more complex micro-op which fits the execution model better and thus can be executed faster or with fewer machine resources involved. Another way to try to improve performance is to cache the decoded micro-operations, so the processor can directly access the decoded micro-operations from a special cache, instead of decoding them again. Intel followed this approach with the Execution Trace Cache feature in their NetBurst microarchitecture (for Pentium 4 processors) and later in the Decoded Stream Buffer (for Core-branded processors since Sandy Bridge). Transmeta used a completely different method in their Crusoe x86-compatible CPUs. They used just-in-time translation to convert x86 instructions to the CPU's native VLIW instruction set. Transmeta argued that their approach allows for more power-efficient designs since the CPU can forgo the complicated decode step of more traditional x86 implementations. Addressing modes Addressing modes for 16-bit processor modes can be summarized by the formula: segment (CS, DS, SS or ES) : [BX or BP] + [SI or DI] + displacement (0, 8 or 16 bits). Addressing modes for 32-bit x86 processor modes can be summarized by the formula: segment (CS, SS, DS, ES, FS or GS) : base register + index register × scale (1, 2, 4 or 8) + displacement (0, 8 or 32 bits), where any general register may serve as the base and any except ESP as the index (see the sketch at the end of this section). Addressing modes for the 64-bit processor mode can be summarized by the same base + scaled index + displacement formula applied to the 64-bit registers, with FS or GS available as segment overrides. Instruction relative addressing in 64-bit code (RIP + displacement, where RIP is the instruction pointer register) simplifies the implementation of position-independent code (as used in shared libraries in some operating systems). The 8086 had 64 KB of eight-bit (or alternatively 32 K-word of 16-bit) I/O space, and a 64 KB (one segment) stack in memory supported by computer hardware. Only words (two bytes) can be pushed to the stack. The stack grows toward numerically lower addresses, with SS:SP pointing to the most recently pushed item. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return address.
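To make the 32-bit formula above concrete, here is a minimal sketch; the register values are hypothetical, and the assertion mirrors the encoding's two-bit scale field.

```python
def effective_address_32(base=0, index=0, scale=1, displacement=0):
    """Effective address per the 32-bit formula above:
    base + index * scale + displacement, wrapping modulo 2**32.
    (In real encodings, ESP cannot be used as the index register.)"""
    assert scale in (1, 2, 4, 8)
    return (base + index * scale + displacement) % 2**32

# The operand [ebx + esi*4 + 8] with EBX = 0x1000 and ESI = 3:
print(hex(effective_address_32(base=0x1000, index=3, scale=4, displacement=8)))
# -> 0x1014
```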
x86 registers 16-bit The original Intel 8086 and 8088 have fourteen 16-bit registers. Four of them (AX, BX, CX, DX) are general-purpose registers (GPRs), although each may have an additional purpose; for example, only CX can be used as a counter with the loop instruction. Each can be accessed as two separate bytes (thus BX's high byte can be accessed as BH and low byte as BL). Two pointer registers have special roles: SP (stack pointer) points to the "top" of the stack, and BP (base pointer) is often used to point at some other place in the stack, typically above the local variables (see frame pointer). The registers SI, DI, BX and BP are address registers, and may also be used for array indexing. One of four possible 'segment registers' (CS, DS, SS and ES) is used to form a memory address. In the original 8086 / 8088 / 80186 / 80188, every address was built from a segment register and one of the general-purpose registers. For example, ds:si is the notation for an address formed as [16 * ds + si], allowing 20-bit addressing rather than 16 bits, although this changed in later processors. At that time only certain combinations were supported. The FLAGS register contains flags such as carry flag, overflow flag and zero flag. Finally, the instruction pointer (IP) points to the next instruction that will be fetched from memory and then executed; this register cannot be directly accessed (read or written) by a program. The Intel 80186 and 80188 are essentially an upgraded 8086 or 8088 CPU, respectively, with on-chip peripherals added, and they have the same CPU registers as the 8086 and 8088 (in addition to interface registers for the peripherals). The 8086, 8088, 80186, and 80188 can use an optional floating-point coprocessor, the 8087. The 8087 appears to the programmer as part of the CPU and adds eight 80-bit wide registers, st(0) to st(7), each of which can hold numeric data in one of seven formats: 32-, 64-, or 80-bit floating point, 16-, 32-, or 64-bit (binary) integer, and 80-bit packed decimal integer. It also has its own 16-bit status register, accessible through the fnstsw instruction, and it is common to simply use some of its bits for branching by copying it into the normal FLAGS. In the Intel 80286, to support protected mode, three special registers hold descriptor table addresses (GDTR, LDTR, IDTR), and a fourth task register (TR) is used for task switching. The 80287 is the floating-point coprocessor for the 80286 and has the same registers as the 8087 with the same data formats. 32-bit With the advent of the 32-bit 80386 processor, the 16-bit general-purpose registers, base registers, index registers, instruction pointer, and FLAGS register, but not the segment registers, were expanded to 32 bits. The nomenclature represented this by prefixing an "E" (for "extended") to the register names in x86 assembly language. Thus, the AX register corresponds to the lower 16 bits of the new 32-bit EAX register, SI corresponds to the lower 16 bits of ESI, and so on. The general-purpose registers, base registers, and index registers can all be used as the base in addressing modes, and all of those registers except for the stack pointer can be used as the index in addressing modes. Two new segment registers (FS and GS) were added. With a greater number of registers, instructions and operands, the machine code format was expanded.
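A minimal sketch of the register aliasing just described (plain Python bit arithmetic; no real register state is involved): the 16-bit AX is the low half of EAX, and AH/AL are the high and low bytes of AX.

```python
def subregisters(eax):
    """Return the AX/AH/AL views of a 32-bit EAX value."""
    ax = eax & 0xFFFF          # AX is the low 16 bits of EAX
    return {"AX": ax, "AH": (ax >> 8) & 0xFF, "AL": ax & 0xFF}

print({name: hex(val) for name, val in subregisters(0xDEADBEEF).items()})
# -> {'AX': '0xbeef', 'AH': '0xbe', 'AL': '0xef'}
```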
To provide backward compatibility, segments with executable code can be marked as containing either 16-bit or 32-bit instructions. Special prefixes allow inclusion of 32-bit instructions in a 16-bit segment or vice versa. The 80386 had an optional floating-point coprocessor, the 80387; it had eight 80-bit wide registers: st(0) to st(7), like the 8087 and 80287. The 80386 could also use an 80287 coprocessor. With the 80486 and all subsequent x86 models, the floating-point processing unit (FPU) is integrated on-chip. The Pentium MMX added eight 64-bit MMX integer vector registers (MM0 to MM7, which share lower bits with the 80-bit-wide FPU stack). With the Pentium III, Intel added a 32-bit Streaming SIMD Extensions (SSE) control/status register (MXCSR) and eight 128-bit SSE floating-point registers (XMM0 to XMM7). 64-bit Starting with the AMD Opteron processor, the x86 architecture extended the 32-bit registers into 64-bit registers in a way similar to how the 16-to-32-bit extension took place. An R-prefix (for "register") identifies the 64-bit registers (RAX, RBX, RCX, RDX, RSI, RDI, RBP, RSP, RFLAGS, RIP), and eight additional 64-bit general registers (R8–R15) were also introduced in the creation of x86-64. Also, eight more SSE vector registers (XMM8–XMM15) were added. However, these extensions are only usable in 64-bit mode, which is one of the two modes available only in long mode. The addressing modes were not dramatically changed from 32-bit mode, except that addressing was extended to 64 bits, virtual addresses are now sign-extended to 64 bits (in order to disallow mode bits in virtual addresses), and other selector details were dramatically reduced. In addition, an addressing mode was added to allow memory references relative to RIP (the instruction pointer), to ease the implementation of position-independent code, used in shared libraries in some operating systems. 128-bit SIMD registers XMM0–XMM15 (XMM0–XMM31 when AVX-512 is supported). 256-bit SIMD registers YMM0–YMM15 (YMM0–YMM31 when AVX-512 is supported). The lower half of each YMM register maps onto the corresponding XMM register. 512-bit SIMD registers ZMM0–ZMM31. The lower half of each ZMM register maps onto the corresponding YMM register. Miscellaneous/special purpose x86 processors that have a protected mode, i.e. the 80286 and later processors, also have three descriptor registers (GDTR, LDTR, IDTR) and a task register (TR). 32-bit x86 processors (starting with the 80386) also include various special/miscellaneous registers such as control registers (CR0 through CR4, plus CR8 in 64-bit mode only), debug registers (DR0 through DR3, plus DR6 and DR7), test registers (TR3 through TR7; 80486 only), and model-specific registers (MSRs, appearing with the Pentium). AVX-512 has eight extra 64-bit mask registers K0–K7 for selecting elements in a vector register. Depending on the vector register and element widths, only a subset of bits of the mask register may be used by a given instruction. Purpose Although the main registers (with the exception of the instruction pointer) are "general-purpose" in the 32-bit and 64-bit versions of the instruction set and can be used for anything, it was originally envisioned that they be used for the following purposes: AL/AH/AX/EAX/RAX: Accumulator CL/CH/CX/ECX/RCX: Counter (for use with loops and strings) DL/DH/DX/EDX/RDX: Extend the precision of the accumulator (e.g.
combine 32-bit EAX and EDX for 64-bit integer operations in 32-bit code) BL/BH/BX/EBX/RBX: Base index (for use with arrays) SP/ESP/RSP: Stack pointer for top address of the stack. BP/EBP/RBP: Stack base pointer for holding the address of the current stack frame. SI/ESI/RSI: Source index for string operations. DI/EDI/RDI: Destination index for string operations. IP/EIP/RIP: Instruction pointer. Holds the program counter, the address of the next instruction. Segment registers: CS: Code DS: Data SS: Stack ES: Extra data FS: Extra data #2 GS: Extra data #3 No particular purposes were envisioned for the other 8 registers available only in 64-bit mode. Some instructions compile and execute more efficiently when using these registers for their designed purpose. For example, using AL as an accumulator and adding an immediate byte value to it produces the efficient add to AL opcode of 04h, whilst using the BL register produces the generic and longer add to register opcode of 80C3h. Another example is double-precision division and multiplication, which work specifically with the AX and DX registers. Modern compilers benefited from the introduction of the SIB (scale-index-base) byte, which allows registers to be treated uniformly (minicomputer-like). However, using the SIB byte universally is non-optimal, as it produces longer encodings than only using it selectively when necessary. (The main benefit of the SIB byte is the orthogonality and more powerful addressing modes it provides, which make it possible to save instructions and the use of registers for address calculations such as scaling an index.) Some special instructions lost priority in the hardware design and became slower than equivalent small code sequences. A notable example is the LODSW instruction. Structure The accompanying register-layout table is not reproduced here; its notes record that the ?PL registers (SPL, BPL) and the ?IL registers (SIL, DIL) are only available in 64-bit mode. Operating modes Real mode Real Address mode, commonly called Real mode, is an operating mode of 8086 and later x86-compatible CPUs. Real mode is characterized by a 20-bit segmented memory address space (meaning that only slightly more than 1 MiB of memory can be addressed), direct software access to peripheral hardware, and no concept of memory protection or multitasking at the hardware level. All x86 CPUs in the 80286 series and later start up in real mode at power-on; 80186 CPUs and earlier had only one operational mode, which is equivalent to real mode in later chips. (On the IBM PC platform, direct software access to the IBM BIOS routines is available only in real mode, since BIOS is written for real mode. However, this is not a property of the x86 CPU but of the IBM BIOS design.) In order to use more than 64 KB of memory, the segment registers must be used. This created great complications for compiler implementors, who introduced odd pointer modes such as "near", "far" and "huge" to leverage the implicit nature of segmented architecture to different degrees, with some pointers containing 16-bit offsets within implied segments and other pointers containing segment addresses and offsets within segments.
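The segment:offset scheme behind these pointer modes can be sketched in a few lines; the segment and offset values below are hypothetical, and the wraparound mask models the original 8086's 20 address lines.

```python
def real_mode_address(segment, offset):
    """Physical address formed in real mode: (segment << 4) + offset,
    masked to 20 bits as on the original 8086 (which had no A20 line)."""
    return ((segment << 4) + offset) & 0xFFFFF

# Many different segment:offset pairs alias the same physical address,
# one of the complications behind the near/far/huge pointer distinction:
print(hex(real_mode_address(0x1234, 0x0005)))  # -> 0x12345
print(hex(real_mode_address(0x1230, 0x0045)))  # -> 0x12345
```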
It is technically possible to use up to 256 KB of memory for code and data (with up to 64 KB for code) by setting all four segment registers once and then using only 16-bit offsets (optionally with default-segment override prefixes) to address memory. However, this puts substantial restrictions on the way data can be addressed and on how memory operands can be combined, and it violates the architectural intent of the Intel designers, which is for separate data items (e.g. arrays, structures, code units) to be contained in separate segments and addressed by their own segment addresses, in new programs that are not ported from earlier 8-bit processors with 16-bit address spaces. Unreal mode Unreal mode is used by some 16-bit operating systems and some 32-bit boot loaders. System Management Mode The System Management Mode (SMM) is only used by the system firmware (BIOS/UEFI), not by operating systems and applications software. SMM code runs in SMRAM. Protected mode In addition to real mode, the Intel 80286 supports protected mode, expanding addressable physical memory to 16 MB and addressable virtual memory to 1 GB, and providing protected memory, which prevents programs from corrupting one another. This is done by using the segment registers only for storing an index into a descriptor table that is stored in memory. There are two such tables, the Global Descriptor Table (GDT) and the Local Descriptor Table (LDT), each holding up to 8192 segment descriptors, each segment giving access to 64 KB of memory. In the 80286, a segment descriptor provides a 24-bit base address, and this base address is added to a 16-bit offset to create an absolute address. The base address from the table fulfills the same role that the literal value of the segment register fulfills in real mode; the segment registers have been converted from direct registers to indirect registers. Each segment can be assigned one of four ring levels used for hardware-based computer security. Each segment descriptor also contains a segment limit field which specifies the maximum offset that may be used with the segment. Because offsets are 16 bits, segments are still limited to 64 KB each in 80286 protected mode. Each time a segment register is loaded in protected mode, the 80286 must read a 6-byte segment descriptor from memory into a set of hidden internal registers. Thus, loading segment registers is much slower in protected mode than in real mode, and changing segments very frequently is to be avoided. Actual memory operations using protected mode segments are not slowed much because the 80286 and later have hardware to check the offset against the segment limit in parallel with instruction execution. The Intel 80386 extended offsets and also the segment limit field in each segment descriptor to 32 bits, enabling a segment to span the entire memory space. It also introduced support in protected mode for paging, a mechanism making it possible to use paged virtual memory (with 4 KB page size). Paging allows the CPU to map any page of the virtual memory space to any page of the physical memory space. To do this, it uses additional mapping tables in memory called page tables. Protected mode on the 80386 can operate with paging either enabled or disabled; the segmentation mechanism is always active and generates virtual addresses that are then mapped by the paging mechanism if it is enabled.
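Ignoring paging, the 286-style protected-mode translation described above amounts to a base-plus-offset lookup with a limit check. A minimal sketch (the descriptor values are hypothetical):

```python
def protected_mode_address(descriptor, offset):
    """Sketch of 80286-style segment translation: the segment register
    selects a descriptor (base, limit); offsets beyond the limit fault
    instead of being translated."""
    base, limit = descriptor
    if offset > limit:
        raise MemoryError("general protection fault: offset exceeds segment limit")
    return base + offset

data_segment = (0x120000, 0xFFFF)  # hypothetical 24-bit base, 64 KB limit
print(hex(protected_mode_address(data_segment, 0x0010)))  # -> 0x120010
```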
The segmentation mechanism can also be effectively disabled by setting all segments to have a base address of 0 and a size limit equal to the whole address space; this also requires a minimally sized segment descriptor table of only four descriptors (since the FS and GS segments need not be used). Paging is used extensively by modern multitasking operating systems. Linux, 386BSD and Windows NT were developed for the 386 because it was the first Intel architecture CPU to support paging and 32-bit segment offsets. The 386 architecture became the basis of all further development in the x86 series. x86 processors that support protected mode boot into real mode for backward compatibility with the older 8086 class of processors. Upon power-on (a.k.a. booting), the processor initializes in real mode, and then begins executing instructions. Operating system boot code, which might be stored in read-only memory, may place the processor into the protected mode to enable paging and other features. Conversely, segment arithmetic, a common practice in real mode code, is not allowed in protected mode. Virtual 8086 mode There is also a sub-mode of operation in 32-bit protected mode (a.k.a. 80386 protected mode) called virtual 8086 mode, also known as V86 mode. This is basically a special hybrid operating mode that allows real mode programs and operating systems to run while under the control of a protected mode supervisor operating system. This allows for a great deal of flexibility in running both protected mode programs and real mode programs simultaneously. This mode is exclusively available for the 32-bit version of protected mode; it does not exist in the 16-bit version of protected mode, or in long mode. Long mode In the mid-1990s, it was obvious that the 32-bit address space of the x86 architecture was limiting its performance in applications requiring large data sets. A 32-bit address space would allow the processor to directly address only 4 GB of data, a size surpassed by applications such as video processing and database engines. Using 64-bit addresses, it is possible to directly address 16 EiB of data, although most 64-bit architectures do not support access to the full 64-bit address space; for example, AMD64 supports only 48 bits from a 64-bit address, split into four paging levels. In 1999, AMD published a (nearly) complete specification for a 64-bit extension of the x86 architecture, which they called x86-64, with declared intentions to produce it. That design is currently used in almost all x86 processors, with some exceptions intended for embedded systems. Mass-produced x86-64 chips for the general market were available four years later, in 2003, after time had been spent testing and refining working prototypes; about the same time, the initial name x86-64 was changed to AMD64. The success of the AMD64 line of processors coupled with lukewarm reception of the IA-64 architecture forced Intel to release its own implementation of the AMD64 instruction set. Intel had previously implemented support for AMD64 but opted not to enable it in hopes that AMD would not bring AMD64 to market before Itanium's new IA-64 instruction set was widely adopted. It branded its implementation of AMD64 as EM64T, and later rebranded it Intel 64. In their literature and product version names, Microsoft and Sun refer to AMD64/Intel 64 collectively as x64 in the Windows and Solaris operating systems. Linux distributions refer to it either as "x86-64", its variant "x86_64", or "amd64". BSD systems use "amd64" while macOS uses "x86_64".
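Because AMD64 initially translates only 48 of the 64 address bits, virtual addresses must be "canonical": sign-extended, as noted earlier, so that bits 48–63 all equal bit 47. A minimal check, with illustrative constants:

```python
def is_canonical_48(addr):
    """True if a 64-bit virtual address is canonical under 48-bit
    translation: bits 48-63 must all equal bit 47 (sign extension)."""
    top_bits = addr >> 47          # bit 47 plus the 16 bits above it
    return top_bits == 0 or top_bits == (1 << 17) - 1

print(is_canonical_48(0x0000_7FFF_FFFF_FFFF))  # True:  top of the lower half
print(is_canonical_48(0xFFFF_8000_0000_0000))  # True:  bottom of the upper half
print(is_canonical_48(0x0000_8000_0000_0000))  # False: inside the non-canonical hole
```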
Long mode is mostly an extension of the 32-bit instruction set, but unlike the 16-to-32-bit transition, many instructions were dropped in the 64-bit mode. This does not affect actual binary backward compatibility (which would execute legacy code in other modes that retain support for those instructions), but it changes the way assemblers and compilers for new code have to work. This was the first time that a major extension of the x86 architecture was initiated and originated by a manufacturer other than Intel. It was also the first time that Intel accepted technology of this nature from an outside source. Extensions Floating-point unit Early x86 processors could be extended with floating-point hardware in the form of a series of floating-point numerical co-processors with names like 8087, 80287 and 80387, abbreviated x87. This was also known as the NPX (Numeric Processor eXtension), an apt name since the coprocessors, while used mainly for floating-point calculations, also performed integer operations on both binary and decimal formats. With very few exceptions, the 80486 and subsequent x86 processors then integrated this x87 functionality on chip, which made the x87 instructions a de facto integral part of the x86 instruction set. Each x87 register, known as ST(0) through ST(7), is 80 bits wide and stores numbers in the IEEE floating-point standard double extended precision format. These registers are organized as a stack with ST(0) as the top. This was done in order to conserve opcode space, and the registers are therefore randomly accessible only for one operand in a register-to-register instruction; ST(0) must always be one of the two operands, either the source or the destination, regardless of whether the other operand is ST(x) or a memory operand. However, random access to the stack registers can be obtained through an instruction which exchanges any specified ST(x) with ST(0). The operations include arithmetic and transcendental functions, including trigonometric and exponential functions, and instructions that load common constants (such as 0; 1; e, the base of the natural logarithm; log2(10); and log10(2)) into one of the stack registers. While the integer ability is often overlooked, the x87 can operate on larger integers with a single instruction than the 8086, 80286, 80386, or any x86 CPU without 64-bit extensions can, and repeated integer calculations even on small values (e.g., 16-bit) can be accelerated by executing integer instructions on the x86 CPU and the x87 in parallel. (The x86 CPU keeps running while the x87 coprocessor calculates, and the x87 sets a signal to the x86 when it is finished or interrupts the x86 if it needs attention because of an error.) MMX MMX is a SIMD instruction set designed by Intel and introduced in 1997 for the Pentium MMX microprocessor. The MMX instruction set was developed from a similar concept first used on the Intel i860. It is supported on most subsequent IA-32 processors by Intel and other vendors. MMX is typically used for video processing (in multimedia applications, for instance). MMX added 8 new registers to the architecture, known as MM0 through MM7 (henceforth referred to as MMn). In reality, these new registers were just aliases for the existing x87 FPU stack registers. Hence, anything that was done to the floating-point stack would also affect the MMX registers. Unlike the FP stack, these MMn registers were fixed, not relative, and therefore they were randomly accessible.
The instruction set did not adopt the stack-like semantics so that existing operating systems could still correctly save and restore the register state when multitasking without modifications. Each of the MMn registers holds 64-bit integers. However, one of the main concepts of the MMX instruction set is the concept of packed data types, which means instead of using the whole register for a single 64-bit integer (quadword), one may use it to contain two 32-bit integers (doubleword), four 16-bit integers (word) or eight 8-bit integers (byte); a sketch of this lane-wise arithmetic follows below. Given that the MMX's 64-bit MMn registers are aliased to the FPU stack and each of the floating-point registers is 80 bits wide, the upper 16 bits of the floating-point registers are unused in MMX. These bits are set to all ones by any MMX instruction, which corresponds to the floating-point representation of NaNs or infinities. 3DNow! In 1997, AMD introduced 3DNow!. The introduction of this technology coincided with the rise of 3D entertainment applications and was designed to improve the CPU's vector processing performance of graphic-intensive applications. 3D video game developers and 3D graphics hardware vendors used 3DNow! to enhance their performance on AMD's K6 and Athlon series of processors. 3DNow! was designed to be the natural evolution of MMX from integers to floating point. As such, it uses exactly the same register naming convention as MMX, that is MM0 through MM7. The only difference is that instead of packing integers into these registers, two single-precision floating-point numbers are packed into each register. The advantage of aliasing the FPU registers is that the same instruction and data structures used to save the state of the FPU registers can also be used to save 3DNow! register states. Thus no special modifications are required to be made to operating systems which would otherwise not know about them. SSE and AVX In 1999, Intel introduced the Streaming SIMD Extensions (SSE) instruction set, following in 2000 with SSE2. The first addition allowed offloading of basic floating-point operations from the x87 stack and the second made MMX almost obsolete and allowed the instructions to be realistically targeted by conventional compilers. Introduced in 2004 along with the Prescott revision of the Pentium 4 processor, SSE3 added specific memory and thread-handling instructions to boost the performance of Intel's HyperThreading technology. AMD licensed the SSE3 instruction set and implemented most of the SSE3 instructions for its revision E and later Athlon 64 processors. The Athlon 64 does not support HyperThreading and lacks those SSE3 instructions used only for HyperThreading. SSE discarded all legacy connections to the FPU stack. This also meant that this instruction set discarded all legacy connections to previous generations of SIMD instruction sets like MMX. But it freed the designers up, allowing them to use larger registers, not limited by the size of the FPU registers. The designers created eight 128-bit registers, named XMM0 through XMM7. (In AMD64, the number of SSE XMM registers has been increased from 8 to 16.) However, the downside was that operating systems had to have an awareness of this new set of instructions in order to be able to save their register states. So Intel created a slightly modified version of Protected mode, called Enhanced mode, which enables the use of SSE instructions, whereas they stay disabled in regular Protected mode. An OS that is aware of SSE will activate Enhanced mode, whereas an unaware OS will only enter traditional Protected mode.
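The packed-data idea described above can be sketched by emulating an MMX-style packed add of four 16-bit lanes inside one 64-bit word (pure Python; PADDW is the real MMX instruction this imitates, but the operand values are arbitrary):

```python
def packed_add_16(a, b):
    """MMX-style packed add (in the spirit of PADDW): treat two 64-bit
    words as four independent 16-bit lanes; each lane wraps around on
    overflow and no carry propagates between lanes."""
    result = 0
    for lane in range(4):
        x = (a >> (16 * lane)) & 0xFFFF
        y = (b >> (16 * lane)) & 0xFFFF
        result |= ((x + y) & 0xFFFF) << (16 * lane)
    return result

print(hex(packed_add_16(0x0001_0002_0003_FFFF, 0x0001_0001_0001_0002)))
# -> 0x2000300040001 (the lowest lane wraps; the others add independently)
```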
SSE is a SIMD instruction set that works only on floating-point values, like 3DNow!. However, unlike 3DNow!, it severs all legacy connections to the FPU stack. Because it has larger registers than 3DNow!, SSE can pack twice the number of single-precision floats into its registers. The original SSE was limited to only single-precision numbers, like 3DNow!. SSE2 introduced the ability to pack double-precision numbers too, something 3DNow! could not do, since a double-precision number is 64 bits in size, which would occupy a full 3DNow! MMn register. At 128 bits, the SSE XMMn registers could pack two double-precision floats into one register. Thus SSE2 is much more suitable for scientific calculations than either SSE1 or 3DNow!, which were limited to only single precision. SSE3 does not introduce any additional registers. The Advanced Vector Extensions (AVX) doubled the size of SSE registers to 256-bit YMM registers. It also introduced the VEX coding scheme to accommodate the larger registers, plus a few instructions to permute elements. AVX2 did not introduce extra registers, but was notable for the addition of masking, gather, and shuffle instructions. AVX-512 features yet another expansion to 32 512-bit ZMM registers and a new EVEX scheme. Unlike its predecessors, which featured a monolithic extension, it is divided into many subsets that specific models of CPUs can choose to implement. Physical Address Extension (PAE) Physical Address Extension or PAE was first added in the Intel Pentium Pro, and later by AMD in the Athlon processors, to allow up to 64 GB of RAM to be addressed. Without PAE, physical RAM in 32-bit protected mode is usually limited to 4 GB. PAE defines a different page table structure with wider page table entries and a third level of page table, allowing additional bits of physical address. Although the initial implementations on 32-bit processors theoretically supported up to 64 GB of RAM, chipset and other platform limitations often restricted what could actually be used. x86-64 processors define page table structures that theoretically allow up to 52 bits of physical address, although again, chipset and other platform concerns (like the number of DIMM slots available, and the maximum RAM possible per DIMM) prevent such a large physical address space from being realized. On x86-64 processors, PAE mode must be active before the switch to long mode, and must remain active while long mode is active, so while in long mode there is no "non-PAE" mode. PAE mode does not affect the width of linear or virtual addresses. x86-64 By the 2000s, 32-bit x86 processors' limits in memory addressing were an obstacle to their use in high-performance computing clusters and powerful desktop workstations. The aged 32-bit x86 was competing with much more advanced 64-bit RISC architectures which could address much more memory. Intel and the whole x86 ecosystem needed 64-bit memory addressing if x86 was to survive the 64-bit computing era, as workstation and desktop software applications were soon to start hitting the limits of 32-bit memory addressing. However, Intel felt that it was the right time to make a bold step and use the transition to 64-bit desktop computers for a transition away from the x86 architecture in general, an experiment which ultimately failed.
In 2001, Intel attempted to introduce a non-x86 64-bit architecture named IA-64 in its Itanium processor, initially aiming for the high-performance computing market, hoping that it would eventually replace the 32-bit x86. While IA-64 was incompatible with x86, the Itanium processor did provide emulation abilities for translating x86 instructions into IA-64, but this affected the performance of x86 programs so badly that it was rarely, if ever, actually useful to the users: programs had to be rewritten for the IA-64 architecture, or their performance on Itanium would be orders of magnitude worse than on a true x86 processor. The market, preferring to continue using x86 chips, rejected the Itanium processor since it broke backward compatibility, and very few programs were rewritten for IA-64. AMD decided to take another path toward 64-bit memory addressing, making sure backward compatibility would not suffer. In April 2003, AMD released the first x86 processor with 64-bit general-purpose registers, the Opteron, capable of addressing much more than 4 GB of virtual memory using the new x86-64 extension (also known as AMD64 or x64). The 64-bit extensions to the x86 architecture were enabled only in the newly introduced long mode, therefore 32-bit and 16-bit applications and operating systems could simply continue using an AMD64 processor in protected or other modes, without even the slightest sacrifice of performance and with full compatibility back to the original instructions of the 16-bit Intel 8086. The market responded positively, adopting the 64-bit AMD processors for both high-performance applications and business or home computers. Seeing the market rejecting the incompatible Itanium processor and Microsoft supporting AMD64, Intel had to respond and introduced its own x86-64 processor, the Prescott Pentium 4, in July 2004. As a result, the Itanium processor with its IA-64 instruction set is rarely used and x86, through its x86-64 incarnation, is still the dominant CPU architecture in non-embedded computers. x86-64 also introduced the NX bit, which offers some protection against security bugs caused by buffer overruns. As a result of AMD's 64-bit contribution to the x86 lineage and its subsequent acceptance by Intel, the 64-bit RISC architectures ceased to be a threat to the x86 ecosystem and almost disappeared from the workstation market. x86-64 began to be utilized in powerful supercomputers (in its AMD Opteron and Intel Xeon incarnations), a market which was previously the natural habitat for 64-bit RISC designs (such as the IBM Power microprocessors or SPARC processors). The great leap toward 64-bit computing and the maintenance of backward compatibility with 32-bit and 16-bit software enabled the x86 architecture to become an extremely flexible platform today, with x86 chips being utilized from small low-power systems (for example, Intel Quark and Intel Atom) to fast gaming desktop computers (for example, Intel Core i7 and AMD FX/Ryzen), and even dominating large supercomputing clusters, effectively leaving only the ARM 32-bit and 64-bit RISC architecture as a competitor in the smartphone and tablet market. Virtualization Prior to 2005, x86 architecture processors were unable to meet the Popek and Goldberg requirements – a specification for virtualization created in 1974 by Gerald J. Popek and Robert P. Goldberg. However, both proprietary and open-source x86 virtualization hypervisor products were developed using software-based virtualization.
Proprietary systems include Hyper-V, Parallels Workstation, VMware ESX, VMware Workstation, VMware Workstation Player and Windows Virtual PC, while free and open-source systems include QEMU, Kernel-based Virtual Machine, VirtualBox, and Xen. The introduction of the AMD-V and Intel VT-x instruction sets in 2005 allowed x86 processors to meet the Popek and Goldberg virtualization requirements. AES The AES instruction set extensions (AES-NI) add hardware support for the Advanced Encryption Standard, accelerating encryption and decryption. APX (Advanced Performance Extensions) APX is a set of extensions that doubles the number of general-purpose registers from 16 to 32 and adds new features to improve general-purpose performance. These extensions have been called "generational" and "the biggest x86 addition since 64 bits". Intel contributed APX support to GNU Compiler Collection (GCC) 14. According to the architecture specification, the main features of APX are: 16 additional general-purpose registers, called the Extended GPRs (EGPRs); three-operand instruction formats for many integer instructions; new conditional instructions for loads, stores, and comparisons with common instructions that do not modify flags; optimized register save/restore operations; and a 64-bit absolute direct jump instruction. Extended GPRs for general-purpose instructions are encoded using a two-byte REX2 prefix, while new instructions and extended operands for existing AVX/AVX2/AVX-512 instructions are encoded with an extended EVEX prefix, which has four variants used for different groups of instructions.
Ytterbium
Ytterbium is a chemical element; it has symbol Yb and atomic number 70. It is a metal, the fourteenth and penultimate element in the lanthanide series, which is the basis of the relative stability of its +2 oxidation state. Like the other lanthanides, its most common oxidation state is +3, as in its oxide, halides, and other compounds. In aqueous solution, like compounds of other late lanthanides, soluble ytterbium compounds form complexes with nine water molecules. Because of its closed-shell electron configuration, its density, melting point and boiling point are much lower than those of most other lanthanides. In 1878, the Swiss chemist Jean Charles Galissard de Marignac separated from the rare earth "erbia" another independent component, which he called "ytterbia", for Ytterby, the village in Sweden near where he found the new component of erbium. He suspected that ytterbia was a compound of a new element that he called "ytterbium". (In total, four elements were named after the village, the others being yttrium, terbium, and erbium.) In 1907, the new earth "lutecia" was separated from ytterbia, from which the element "lutecium" (now lutetium) was extracted by Georges Urbain, Carl Auer von Welsbach, and Charles James. After some discussion, Marignac's name "ytterbium" was retained. A relatively pure sample of the metal was not obtained until 1953. At present, ytterbium is mainly used as a dopant of stainless steel or active laser media, and less often as a gamma ray source. Natural ytterbium is a mixture of seven stable isotopes, which altogether are present at concentrations of 0.3 parts per million. This element is mined in China, the United States, Brazil, and India in form of the minerals monazite, euxenite, and xenotime. The ytterbium concentration is low because it is found only among many other rare-earth elements; moreover, it is among the least abundant. Once extracted and prepared, ytterbium is somewhat hazardous as an eye and skin irritant. The metal is a fire and explosion hazard. Characteristics Physical properties Ytterbium is a soft, malleable and ductile chemical element. When pure, it displays a bright silvery luster. It is a rare-earth element, and it is readily dissolved by the strong mineral acids. Ytterbium has three allotropes labeled by the Greek letters alpha, beta and gamma. Their transformation temperatures are −13 °C and 795 °C, although the exact transformation temperature depends on the pressure and stress. The beta allotrope (6.966 g/cm3) exists at room temperature, and it has a face-centered cubic crystal structure. The high-temperature gamma allotrope (6.57 g/cm3) has a body-centered cubic crystalline structure. The alpha allotrope (6.903 g/cm3) has a hexagonal crystalline structure and is stable at low temperatures. The beta allotrope has a metallic electrical conductivity at normal atmospheric pressure, but it becomes a semiconductor when exposed to a pressure of about 16,000 atmospheres (1.6 GPa). Its electrical resistivity increases ten times upon compression to 39,000 atmospheres (3.9 GPa), but then drops to about 10% of its room-temperature resistivity at about 40,000 atm (4.0 GPa). In contrast to the other rare-earth metals, which usually have antiferromagnetic and/or ferromagnetic properties at low temperatures, ytterbium is paramagnetic at temperatures above 1.0 kelvin. However, the alpha allotrope is diamagnetic. With a melting point of 824 °C and a boiling point of 1196 °C, ytterbium has the smallest liquid range of all the metals.
In contrast to most other lanthanides, which have a close-packed hexagonal lattice, ytterbium crystallizes in the face-centered cubic system. Ytterbium has a density of 6.973 g/cm3, which is significantly lower than those of the neighboring lanthanides, thulium (9.32 g/cm3) and lutetium (9.841 g/cm3). Its melting and boiling points are also significantly lower than those of thulium and lutetium. This is due to the closed-shell electron configuration of ytterbium ([Xe] 4f14 6s2), which causes only the two 6s electrons to be available for metallic bonding (in contrast to the other lanthanides where three electrons are available) and increases ytterbium's metallic radius. Chemical properties Ytterbium metal tarnishes slowly in air, taking on a golden or brown hue. Finely dispersed ytterbium readily oxidizes in air and under oxygen. Mixtures of powdered ytterbium with polytetrafluoroethylene or hexachloroethane burn with an emerald-green flame. Ytterbium reacts with hydrogen to form various non-stoichiometric hydrides. Ytterbium dissolves slowly in water, but quickly in acids, liberating hydrogen. Ytterbium is quite electropositive, and it reacts slowly with cold water and quite quickly with hot water to form ytterbium(III) hydroxide: 2 Yb (s) + 6 H2O (l) → 2 Yb(OH)3 (aq) + 3 H2 (g) Ytterbium reacts with all the halogens: 2 Yb (s) + 3 F2 (g) → 2 YbF3 (s) [white] 2 Yb (s) + 3 Cl2 (g) → 2 YbCl3 (s) [white] 2 Yb (s) + 3 Br2 (l) → 2 YbBr3 (s) [white] 2 Yb (s) + 3 I2 (s) → 2 YbI3 (s) [white] The ytterbium(III) ion absorbs light in the near-infrared range of wavelengths, but not in visible light, so ytterbia, Yb2O3, is white in color and the salts of ytterbium are also colorless. Ytterbium dissolves readily in dilute sulfuric acid to form solutions that contain the colorless Yb(III) ions, which exist as nonahydrate complexes: 2 Yb (s) + 3 H2SO4 (aq) + 18 H2O (l) → 2 [Yb(H2O)9]3+ (aq) + 3 SO42− (aq) + 3 H2 (g) Yb(II) vs. Yb(III) Although usually trivalent, ytterbium readily forms divalent compounds. This behavior is unusual for lanthanides, which almost exclusively form compounds with an oxidation state of +3. The +2 state has a valence electron configuration of 4f14 because the fully filled f-shell gives more stability. The yellow-green ytterbium(II) ion is a very strong reducing agent and decomposes water, releasing hydrogen, and thus only the colorless ytterbium(III) ion occurs in aqueous solution. Samarium and thulium also behave this way in the +2 state, but europium(II) is stable in aqueous solution. Ytterbium metal behaves similarly to europium metal and the alkaline earth metals, dissolving in ammonia to form blue electride salts. Isotopes Natural ytterbium is composed of seven stable isotopes: 168Yb, 170Yb, 171Yb, 172Yb, 173Yb, 174Yb, and 176Yb, with 174Yb being the most common, at 31.8% of the natural abundance. Thirty-two radioisotopes have been observed, with the most stable ones being 169Yb with a half-life of 32.0 days, 175Yb with a half-life of 4.18 days, and 166Yb with a half-life of 56.7 hours. All of the remaining radioactive isotopes have half-lives that are less than two hours, and most of these have half-lives under 20 minutes. Ytterbium also has 12 metastable states, with the most stable being 169mYb (t1/2 46 seconds). The isotopes of ytterbium range from 149Yb to 187Yb. The primary decay mode of ytterbium isotopes lighter than the most abundant stable isotope, 174Yb, is electron capture, and the primary decay mode for those heavier than 174Yb is beta decay.
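These half-lives translate directly into remaining activity via the standard exponential decay law. A minimal sketch using the 32.0-day half-life of 169Yb quoted above (the 64-day interval is an arbitrary example):

```python
from math import exp, log

def remaining_fraction(elapsed_days, half_life_days):
    """Fraction of a radioisotope remaining after a given time:
    N/N0 = exp(-ln(2) * t / t_half)."""
    return exp(-log(2) * elapsed_days / half_life_days)

# 169Yb has a 32.0-day half-life, so after 64 days (two half-lives)
# a quarter of the original activity remains:
print(remaining_fraction(64, 32.0))  # -> 0.25 (up to float rounding)
```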
Correspondingly, the primary decay products of ytterbium isotopes lighter than 174Yb are thulium isotopes, and the primary decay products of ytterbium isotopes heavier than 174Yb are lutetium isotopes. Occurrence Ytterbium is found with other rare-earth elements in several rare minerals. It is most often recovered commercially from monazite sand (0.03% ytterbium). The element is also found in euxenite and xenotime. The main mining areas are China, the United States, Brazil, India, Sri Lanka, and Australia. Reserves of ytterbium are estimated as one million tonnes. Ytterbium is normally difficult to separate from other rare earths, but ion-exchange and solvent extraction techniques developed in the mid- to late 20th century have simplified separation. Compounds of ytterbium are rare and have not yet been well characterized. The abundance of ytterbium in the Earth's crust is about 3 mg/kg. As an even-numbered lanthanide, in accordance with the Oddo–Harkins rule, ytterbium is significantly more abundant than its immediate neighbors, thulium and lutetium, which occur in the same concentrate at levels of about 0.5% each. The world production of ytterbium is only about 50 tonnes per year, reflecting that it has few commercial applications. Microscopic traces of ytterbium are used as a dopant in the Yb:YAG laser, a solid-state laser in which ytterbium is the element that undergoes stimulated emission of electromagnetic radiation. Ytterbium is often the most common substitute in yttrium minerals. In a very few known occurrences, ytterbium prevails over yttrium, as, for example, in xenotime-(Yb). There is also a report of native ytterbium in the Moon's regolith. Production It is relatively difficult to separate ytterbium from other lanthanides due to its similar properties. As a result, the process is somewhat long. First, minerals such as monazite or xenotime are dissolved into various acids, such as sulfuric acid. Ytterbium can then be separated from other lanthanides by ion exchange, as can other lanthanides. The solution is then applied to a resin, to which different lanthanides bind with different affinities. The bound lanthanides are then stripped from the resin using complexing agents, and due to the different types of bonding exhibited by the different lanthanides, it is possible to isolate the compounds. Ytterbium is separated from other rare earths either by ion exchange or by reduction with sodium amalgam. In the latter method, a buffered acidic solution of trivalent rare earths is treated with molten sodium-mercury alloy, which reduces and dissolves Yb3+. The alloy is treated with hydrochloric acid. The metal is extracted from the solution as oxalate and converted to oxide by heating. The oxide is reduced to metal by heating with lanthanum, aluminium, cerium or zirconium in high vacuum. The metal is purified by sublimation and collected on a condenser plate. Compounds The chemical behavior of ytterbium is similar to that of the rest of the lanthanides. Most ytterbium compounds are found in the +3 oxidation state, and its salts in this oxidation state are nearly colorless. Like europium, samarium, and thulium, the trihalides of ytterbium can be reduced to the dihalides by hydrogen, zinc dust, or by the addition of metallic ytterbium. The +2 oxidation state occurs only in solid compounds and reacts in some ways similarly to the alkaline earth metal compounds; for example, ytterbium(II) oxide (YbO) shows the same structure as calcium oxide (CaO).
Halides Ytterbium forms both dihalides and trihalides with the halogens fluorine, chlorine, bromine, and iodine. The dihalides are susceptible to oxidation to the trihalides at room temperature and disproportionate to the trihalides and metallic ytterbium at high temperature: 3 YbX2 → 2 YbX3 + Yb (X = F, Cl, Br, I) Some ytterbium halides are used as reagents in organic synthesis. For example, ytterbium(III) chloride (YbCl3) is a Lewis acid and can be used as a catalyst in aldol and Diels–Alder reactions. Ytterbium(II) iodide (YbI2) may be used, like samarium(II) iodide, as a reducing agent for coupling reactions. Ytterbium(III) fluoride (YbF3) is used as an inert and non-toxic tooth filling as it continuously releases fluoride ions, which are good for dental health, and is also a good X-ray contrast agent. Oxides Ytterbium reacts with oxygen to form ytterbium(III) oxide (Yb2O3), which crystallizes in the "rare-earth C-type sesquioxide" structure, which is related to the fluorite structure with one quarter of the anions removed, leading to ytterbium atoms in two different six-coordinate (non-octahedral) environments. Ytterbium(III) oxide can be reduced to ytterbium(II) oxide (YbO) with elemental ytterbium, which crystallizes in the same structure as sodium chloride. Borides Ytterbium dodecaboride (YbB12) is a crystalline material that has been studied to understand various electronic and structural properties of many chemically related substances. It is a Kondo insulator. It is a quantum material; under normal conditions, the interior of the bulk crystal is an insulator whereas the surface is highly conductive. Among the rare earth elements, ytterbium is one of the few that can form a stable dodecaboride, a property attributed to its comparatively small atomic radius. History Ytterbium was discovered by the Swiss chemist Jean Charles Galissard de Marignac in the year 1878. While examining samples of gadolinite, Marignac found a new component in the earth then known as erbia, and he named it ytterbia, for Ytterby, the Swedish village near where he found the new component of erbium. Marignac suspected that ytterbia was a compound of a new element that he called "ytterbium". In 1907, the French chemist Georges Urbain separated Marignac's ytterbia into two components: neoytterbia and lutecia. Neoytterbia later became known as the element ytterbium, and lutecia became known as the element lutetium. The Austrian chemist Carl Auer von Welsbach independently isolated these elements from ytterbia at about the same time, but he called them aldebaranium (Ad; after Aldebaran) and cassiopeium; the American chemist Charles James also independently isolated these elements at about the same time. Urbain and Welsbach accused each other of publishing results based on those of the other party. The Commission on Atomic Mass, consisting of Frank Wigglesworth Clarke, Wilhelm Ostwald, and Georges Urbain, which was then responsible for the attribution of new element names, settled the dispute in 1909 by granting priority to Urbain and adopting his names as official ones, based on the fact that the separation of lutetium from Marignac's ytterbium was first described by Urbain. After Urbain's names were recognized, neoytterbium was reverted to ytterbium. The chemical and physical properties of ytterbium could not be determined with any precision until 1953, when the first nearly pure ytterbium metal was produced by using ion-exchange processes.
The price of ytterbium was relatively stable between 1953 and 1998 at about US$1,000/kg. Applications Source of gamma rays The 169Yb isotope (with a half-life of 32 days), which is created along with the short-lived 175Yb isotope (half-life 4.2 days) by neutron activation during the irradiation of ytterbium in nuclear reactors, has been used as a radiation source in portable X-ray machines. Like X-rays, the gamma rays emitted by the source pass through soft tissues of the body, but are blocked by bones and other dense materials. Thus, small 169Yb samples (which emit gamma rays) act like tiny X-ray machines useful for radiography of small objects. Experiments show that radiographs taken with a 169Yb source are roughly equivalent to those taken with X-rays having energies between 250 and 350 keV. 169Yb is also used in nuclear medicine. High-stability atomic clocks In 2013, ytterbium clocks held the record for stability, with ticks stable to within less than two parts in 1 quintillion (10^18). These clocks, developed at the National Institute of Standards and Technology (NIST), rely on about 10,000 ytterbium atoms laser-cooled to 10 microkelvin (10 millionths of a degree above absolute zero) and trapped in an optical lattice—a series of pancake-shaped wells made of laser light. Another laser that "ticks" 518 trillion times per second (518 THz) provokes a transition between two energy levels in the atoms. The large number of atoms is key to the clocks' high stability. Visible light waves oscillate faster than microwaves, hence optical clocks can be more precise than caesium atomic clocks. The Physikalisch-Technische Bundesanstalt is working on several such optical clocks. The model with a single ytterbium ion caught in an ion trap is highly accurate. The optical clock based on it is exact to 17 digits after the decimal point. A pair of experimental atomic clocks based on ytterbium atoms at the National Institute of Standards and Technology has set a record for stability. NIST physicists reported in the August 22, 2013 issue of Science Express that the ytterbium clocks' ticks are stable to within less than two parts in 1 quintillion (1 followed by 18 zeros), roughly 10 times better than the previous best published results for other atomic clocks. The clocks would be accurate within a second for a period comparable to the age of the universe. Doping of stainless steel Ytterbium can also be used as a dopant to help improve the grain refinement, strength, and other mechanical properties of stainless steel. Some ytterbium alloys have rarely been used in dentistry. Ytterbium as dopant of active media The Yb3+ ion is used as a doping material in active laser media, specifically in solid state lasers and double clad fiber lasers. Ytterbium lasers are highly efficient, have long lifetimes and can generate short pulses; ytterbium can also easily be incorporated into the material used to make the laser. Ytterbium lasers commonly radiate in the 1.03–1.12 μm band, being optically pumped at wavelengths of 900 nm–1 μm, depending on the host and application. The small quantum defect makes ytterbium a prospective dopant for efficient lasers and power scaling. The kinetics of excitation in ytterbium-doped materials are simple and can be described within the concept of effective cross-sections; for most ytterbium-doped laser materials (as for many other optically pumped gain media), the McCumber relation holds, although the application to the ytterbium-doped composite materials was under discussion.
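The "quantum defect" mentioned above is the fraction of pump photon energy not carried by the laser photon, 1 − λpump/λlaser. A small worked example; the 940 nm pump and 1030 nm emission wavelengths are illustrative values picked from within the bands quoted in the text:

```python
def quantum_defect(pump_nm, laser_nm):
    """Fraction of the pump photon energy lost as heat:
    1 - lambda_pump / lambda_laser (photon energy scales as 1/lambda)."""
    return 1 - pump_nm / laser_nm

print(f"{quantum_defect(940, 1030):.1%}")  # -> 8.7%, small compared with
                                           #    many other laser dopants
```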
In practice, low concentrations of ytterbium are usually used. At high concentrations, the ytterbium-doped materials show photodarkening (glass fibers) or even a switch to broadband emission (crystals and ceramics) instead of efficient laser action. This effect may be related not only to overheating, but also to conditions of charge compensation at high concentrations of ytterbium ions. Much progress has been made in power scaling of lasers and amplifiers produced with ytterbium-doped (Yb-doped) optical fibers. Power levels have increased beyond the 1 kW regime due to advancements in components as well as in the Yb-doped fibers themselves. Fabrication of low-NA, large-mode-area (LMA) fibers enables near-perfect beam qualities (M2 < 1.1) at power levels of 1.5 kW to greater than 2 kW at ~1064 nm in a broadband configuration. Ytterbium-doped LMA fibers also have the advantage of a larger mode field diameter, which mitigates the impact of nonlinear effects such as stimulated Brillouin scattering and stimulated Raman scattering that limit the achievement of higher power levels, and which provides a distinct advantage over single-mode ytterbium-doped fibers. To achieve even higher power levels in ytterbium-based fiber systems, all factors of the fiber must be considered. These power levels can be achieved only through optimization of all ytterbium fiber parameters, ranging from the core background losses to the geometrical properties, in order to reduce the splice losses within the cavity. Power scaling also requires optimization of matching passive fibers within the optical cavity. The optimization of the ytterbium-doped glass itself, through host glass modification with various dopants, also plays a large part in reducing the background loss of the glass, improving the slope efficiency of the fiber, and improving photodarkening performance, all of which contribute to increased power levels in 1 μm systems. Ion qubits for quantum computing The charged ion 171Yb+ is used by multiple academic groups and companies as the trapped-ion qubit for quantum computing. Entangling gates, such as the Mølmer–Sørensen gate, have been achieved by addressing the ions with mode-locked pulse lasers. Others Ytterbium metal increases its electrical resistivity when subjected to high stresses. This property is used in stress gauges to monitor ground deformations from earthquakes and explosions. Currently, ytterbium is being investigated as a possible replacement for magnesium in high-density pyrotechnic payloads for kinematic infrared decoy flares. As ytterbium(III) oxide has a significantly higher emissivity in the infrared range than magnesium oxide, a higher radiant intensity is obtained with ytterbium-based payloads in comparison to those commonly based on magnesium/Teflon/Viton (MTV). Precautions Although ytterbium is fairly stable chemically, it is stored in airtight containers and in an inert atmosphere such as a nitrogen-filled dry box to protect it from air and moisture. All compounds of ytterbium are treated as highly toxic, although studies appear to indicate that the danger is minimal. However, ytterbium compounds cause irritation to human skin and eyes, and some might be teratogenic. Metallic ytterbium dust can spontaneously combust.
Yard
The yard (symbol: yd) is an English unit of length in both the British imperial and US customary systems of measurement equalling 3 feet or 36 inches. Since 1959 it has been standardized by international agreement as exactly 0.9144 meter. A distance of 1,760 yards is equal to 1 mile. The US survey yard is very slightly longer. Name The term yard derives from an Old English word used for branches, staves and measuring rods. It is first attested in the late 7th-century laws of Ine of Wessex, where the "yard of land" mentioned is the yardland, an old English unit of tax assessment equal to a quarter of a hide. Around the same time the Lindisfarne Gospels account of the messengers from John the Baptist in the Gospel of Matthew used it for a branch swayed by the wind. In addition to the yardland, Old and Middle English both used their forms of "yard" to denote surveying lengths used in computing acres, a distance now usually known as the "rod". A unit of three English feet is attested in an early statute (see below), but there it is called an ell, a separate and usually longer unit of around 45 inches. The use of the word "yard" to describe this length is first attested in William Langland's poem on Piers Plowman. The usage seems to derive from the prototype standard rods held by the king and his magistrates (see below). The word "yard" is a homonym of "yard" in the sense of an enclosed area of land. This second meaning of "yard" has an etymology related to the word "garden" and is not related to the unit of measurement. In India, the yard is colloquially known as gauge or guz; one gauge is 3 feet. History Origin The origin of the yard measure is uncertain. Both the Romans and the Welsh used multiples of a shorter foot, but 2½ Roman feet was a "step" and 3 Welsh feet was a "pace". The Proto-Germanic cubit or arm's-length has been reconstructed as *alinô, which developed into the Old English and Middle English forerunners of the modern ell of 45 inches. This has led some to derive the yard of three English feet from pacing; others from the ell or cubit; and still others from Henry I's arm standard (see below). Based on the etymology of the other "yard", some suggest it originally derived from the girth of a person's waist, while others believe it originated as a cubic measure. From ell to yard The earliest record of a prototype measure is the statute II Edgar Cap. 8 (AD 959–963), which survives in several variant manuscripts. In it, Edgar the Peaceful directed the Witenagemot at Andover that "the measure held at Winchester" should be observed throughout his realm. (Some manuscripts read "at London and at Winchester".) The statutes of William I similarly refer to and uphold the standard measures of his predecessors without naming them. William of Malmesbury's Deeds of the Kings of England records that during the reign of Henry I (1100–1135), "the measure of his arm was applied to correct the false ell of the traders and enjoined on all throughout England." The folktale that the length was bounded by the king's nose was added some centuries later. Charles Moore Watson dismisses William's account as "childish", but William was among the most conscientious and trustworthy medieval historians. The French "king's foot" was supposed to have derived from Charlemagne, and the English kings subsequently repeatedly intervened to impose shorter units with the aim of increasing tax revenue.
The earliest surviving definition of this shorter unit appears in the Act on the Composition of Yards and Perches, one of the statutes of uncertain date tentatively dated to the reign of Edward I or II. Its wording varies in surviving accounts. One reads: It is ordained that 3 grains of barley dry and round do make an inch, 12 inches make 1 foot, 3 feet make 1 yard, 5 yards and a half make a perch, and 40 perches in length and 4 in breadth make an acre. The Liber Horn compilation (1311) includes that statute with slightly different wording and adds: And be it remembered that the iron yard of our Lord the King containeth 3 feet and no more, and a foot ought to contain 12 inches by the right measure of this yard measured, to wit, the 36th part of this yard rightly measured maketh 1 inch neither more nor less and 5 yards and a half make a perch that is 16 feet and a half measured by the aforesaid yard of our Lord the King. In some early books, this act was appended to another statute of uncertain date titled the Statute for the Measuring of Land. The act was not repealed until the Weights and Measures Act 1824. Yard and inch In a law of 1439 (18 Hen. 6. c. 16) the sale of cloth by the "yard and handful" was abolished, and the "yard and inch" instituted (see ell). There shall be but one Measure of Cloth through the Realm by the Yard and the Inch, and not by the Yard and Handful, according to the London Measure. According to Connor, cloth merchants had previously sold cloth by the yard and handful to evade high taxes on cloth (the extra handful being essentially a black-market transaction). Enforcement efforts resulted in cloth merchants switching over to the yard and inch, at which point the government gave up and made the yard and inch official. In 1552, the yard and inch for cloth measurement was again sanctioned in law (5 & 6 Edw. 6. c. 6. An Act for the true making of Woolen Cloth.) The yard and inch for cloth measurement was also sanctioned again in legislation of 1557–1558 (4 & 5 Ph. & M. c. 5. An act touching the making of woolen clothes. par. IX.) IX. Item, That every ordinary kersie mentioned in the said act shall contain in length in the water betwixt xvi. and xvii. yards, yard and inch; and being well scoured thicked, milled, dressed and fully dried, shall weigh nineteen pounds the piece at the least:... As recently as 1593, the same principle is found mentioned once again (35 Eliz. 1. c. 10 An act for the reformation of sundry abuses in clothes, called Devonshire kerjies or dozens, according to a proclamation of the thirty-fourth year of the reign of our sovereign lady the Queen that now is. par. III.) (2) and each and every of the same Devonshire kersies or dozens, so being raw, and as it cometh forth off the weaver's loom (without racking, stretching, straining or other device to encrease the length thereof) shall contain in length between fifteen and sixteen yards by the measure of yard and inch by the rule,... Physical standards One of the oldest yard-rods in existence is the clothyard of the Worshipful Company of Merchant Taylors. It consists of a hexagonal iron rod, slightly short of a full yard, encased within a silver rod bearing the hallmark 1445. In the early 15th century, the Merchant Taylors Company was authorized to "make search" at the opening of the annual St. Bartholomew's Day Cloth Fair. In the mid-18th century, Graham compared the standard yard of the Royal Society to other existing standards.
These were a "long-disused" standard made in 1490 during the reign of Henry VII, and a brass yard and a brass ell from 1588 in the time of Queen Elizabeth and still in use at the time, held at the Exchequer; a brass yard and a brass ell at the Guildhall; and a brass yard presented to the Clock-Makers' Company by the Exchequer in 1671. The Exchequer yard was taken as "true"; the variations among the others were found to amount to only small fractions of an inch, and an additional graduation for the Exchequer yard was made on the Royal Society's standard. In 1758, the legislature required the construction of a standard yard, which was made from the Royal Society's standard and was deposited with the clerk of the House of Commons; it was divided into feet, one of the feet into inches, and one of the inches into tenths. A copy of it, but with upright cheeks between which other measuring rods could be placed, was made for the Exchequer for commercial use. 19th-century Britain Following Royal Society investigations by John Playfair, William Hyde Wollaston and John Warner, in 1814 a committee of parliament proposed defining the standard yard based upon the length of a seconds pendulum. This idea was examined but not approved. The Weights and Measures Act 1824 (5 Geo. 4. c. 74), An Act for ascertaining and establishing Uniformity of Weights and Measures, stipulates that: From and after the First Day of May One thousand eight hundred and twenty five the Straight Line or Distance between the Centres of the Two Points in the Gold Studs of the Straight Brass Rod now in the Custody of the Clerk of the House of Commons whereon the Words and Figures "Standard Yard 1760" are engraved shall be and the same is hereby declared to be the original and genuine Standard of that Measure of Length or lineal Extension called a Yard; and that the same Straight Line or Distance between the Centres of the said Two Points in the said Gold Studs in the said Brass Rod the Brass being at the Temperature of Sixty two Degrees by Fahrenheit's Thermometer shall be and is hereby denominated the Imperial Standard Yard and shall be and is hereby declared to be the Unit or only Standard Measure of Extension, wherefrom or whereby all other Measures of Extension whatsoever, whether the same be lineal, superficial or solid, shall be derived, computed and ascertained; and that all Measures of Length shall be taken in Parts or Multiples, or certain Proportions of the said Standard Yard; and that One third Part of the said Standard Yard shall be a Foot, and the Twelfth Part of such Foot shall be an Inch; and that the Pole or Perch in Length shall contain Five such Yards and a Half, the Furlong Two hundred and twenty such Yards, and the Mile One thousand seven hundred and sixty such Yards. In 1834, the primary Imperial yard standard was partially destroyed in a fire known as the Burning of Parliament. In 1838, a commission was formed to reconstruct the lost standards, including the troy pound, which had also been destroyed. In 1845, a new yard standard was constructed based on two previously existing standards known as A1 and A2, both of which had been made for the Ordnance Survey, and R.S. 46, the yard of the Royal Astronomical Society. All three had been compared to the Imperial standard before the fire. The new standard was made of Baily's metal No. 4, an alloy of 16 parts copper to 1 part zinc together with tin. It was 38 inches long and 1 inch square. The Weights and Measures Act 1855 granted official recognition to the new standards.
Between 1845 and 1855, forty yard standards were constructed, one of which was selected as the new Imperial standard. Four others, known as 'parliamentary copies', were distributed to the Royal Mint, the Royal Society of London, the Royal Observatory at Greenwich, and the New Palace at Westminster, commonly called the Houses of Parliament. The other 35 yard standards were distributed to the cities of London, Edinburgh, and Dublin, as well as the United States and other countries (although only the first five had official status). The imperial standard received by the United States is known as "Bronze Yard No. 11". The Weights and Measures Act 1878 (41 & 42 Vict. c. 49) confirmed the status of the existing yard standard, mandated regular intercomparisons between the several yard standards, and authorized the construction of one additional Parliamentary Copy (made in 1879 and known as Parliamentary Copy VI). Definition of the yard in terms of the meter Subsequent measurements revealed that the yard standard and its copies were shrinking at the rate of one part per million every twenty years due to the gradual release of strain incurred during the fabrication process. The international prototype meter, on the other hand, was comparatively stable. A measurement made in 1895 determined the length of the meter at 39.370113 inches relative to the imperial standard yard. The Weights and Measures (Metric System) Act 1897 (60 & 61 Vict. c. 46) in conjunction with Order in Council 411 (1898) made this relationship official. After 1898, the de facto legal definition of the yard came to be accepted as 36⁄39.370113 of a meter. The yard (known as the "international yard" in the United States) was legally defined to be exactly 0.9144 meter in 1959, under an agreement between Australia, Canada, New Zealand, South Africa, the United Kingdom and the United States. In the UK, the provisions of the treaty were ratified by the Weights and Measures Act 1963. The Imperial Standard Yard of 1855 was renamed the United Kingdom Primary Standard Yard and retained its official status as the national prototype yard (Weights and Measures Act 1985). (A surviving example, Parliamentary Copy VI of the Imperial Standard Yard, is made of Baily's metal, was cast in 1878, and is inscribed "41 & 42 Victoria, Chapter 49. Standard Yard at 62° Faht.") Current use The yard is used to define the dimensions of the playing area in American football, Canadian football, association football, cricket, and in some countries golf. There are corresponding units of area and volume, the square yard and cubic yard respectively. These are sometimes referred to simply as "yards" when no ambiguity is possible, for example an American or Canadian concrete mixer may be marked with a capacity of "9 yards" or "1.5 yards", where cubic yards are meant. Yards are also the legal requirement on road signs for shorter distances in the United Kingdom, and are frequently used in conversation by Britons to describe distances, much as in the United States. Textiles and fat quarters The yard, subdivided into eighths, is used for the purchase of fabrics in the United States and United Kingdom and was previously used elsewhere. In the United States the term "fat quarter" is used for a piece of fabric which is half a yard in length cut from a roll and then cut again along the width so that it is only half the width of the roll, thus the same area as a piece of one quarter yard cut from the full width of the roll; these pieces are popular for patchwork and quilting.
The term "fat eighth" is also used, for a piece of one quarter yard cut from half the roll width, the same area as one eighth cut from the full width of the roll. Equivalences For purposes of measuring cloth, the early yard was divided by the binary method into two, four, eight and sixteen parts. The two most common divisions were the fourth and sixteenth parts. The quarter of a yard (9 inches) was known as the "quarter" without further qualification, while the sixteenth of a yard (2.25 inches) was called a nail. The eighth of a yard (4.5 inches) was sometimes called a finger, but was more commonly referred to simply as an eighth of a yard, while the half-yard (18 inches) was called "half a yard". Other units related to the yard, but not specific to cloth measurement: two yards are a fathom, a quarter of a yard (when not referring to cloth) is a span. Conversions international yard (defined 1959): 1250 (international) yards = 1143 meters 1 (international) yard = 0.9144 meters (exact) 1 (international) statute mile = 8 international furlongs = 80 international chains = 1760 (international) yards pre-1959 US yard – defined 1869, implemented 1893, deprecated 2023 For survey purposes, certain pre-1959 units were retained, usually prefaced by the word "survey," among them the survey inch, survey foot, and survey mile, also known as the statute mile. The rod and furlong existed only in their pre-1959 form and are thus not prefaced by the word "survey", and were deprecated at the same time as the survey foot. New conversion factors for the rod and furlong, as 16.5 international feet and 660 international feet respectively, have been published by NIST. However, it is not clear if a "survey yard" actually exists. If it did, its hypothetical values would be as follows: 3937 survey yards = 3600 meters 1 survey yard ≈ 0.914402 meters 1 survey mile = 8 furlongs = 80 chains = 1760 survey yards Comparing international yards and survey yards 500,000 (international) yards = 499,999 survey yards = 457,200 meters 1 (international) yard = 0.999998 survey yards (exact) 1 (international) mile = 0.999998 survey miles (exact)
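The exact relationships above follow directly from the two defining ratios quoted in this section (1 international yard = 0.9144 m; 3937 survey yards = 3600 m) and can be checked with a short script; the sketch below encodes nothing beyond those two definitions.

from fractions import Fraction

INTL_YARD_M = Fraction(9144, 10000)    # 1 international yard = 0.9144 m (exact, 1959 agreement)
SURVEY_YARD_M = Fraction(3600, 3937)   # 3937 survey yards = 3600 m (exact)

# The two yards differ by exactly 2 parts per million:
print(INTL_YARD_M / SURVEY_YARD_M)     # 499999/500000, i.e. 0.999998 exactly

# 1,760 yards to the mile gives the familiar metric mile:
print(float(1760 * INTL_YARD_M))       # 1609.344 (meters)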
Physical sciences
English
Basics and measurement
34254
https://en.wikipedia.org/wiki/Yellow%20fever
Yellow fever
Yellow fever is a viral disease of typically short duration. In most cases, symptoms include fever, chills, loss of appetite, nausea, muscle pains—particularly in the back—and headaches. Symptoms typically improve within five days. In about 15% of people, within a day of improving, the fever comes back, abdominal pain occurs, and liver damage begins causing yellow skin. If this occurs, the risk of bleeding and kidney problems is increased. The disease is caused by the yellow fever virus and is spread by the bite of an infected mosquito. It infects humans, other primates, and several types of mosquitoes. In cities, it is spread primarily by Aedes aegypti, a type of mosquito found throughout the tropics and subtropics. The virus is an RNA virus of the genus Flavivirus. The disease may be difficult to tell apart from other illnesses, especially in the early stages. To confirm a suspected case, blood-sample testing with a polymerase chain reaction is required. A safe and effective vaccine against yellow fever exists, and some countries require vaccinations for travelers. Other efforts to prevent infection include reducing the population of the transmitting mosquitoes. In areas where yellow fever is common, early diagnosis of cases and immunization of large parts of the population are important to prevent outbreaks. Once a person is infected, management is symptomatic; no specific measures are effective against the virus. Death occurs in up to half of those who get severe disease. In 2013, yellow fever was estimated to have caused 130,000 severe infections and 78,000 deaths in Africa. Approximately 90 percent of an estimated 200,000 cases of yellow fever per year occur in Africa. Nearly a billion people live in an area of the world where the disease is common. It is common in tropical areas of the continents of South America and Africa, but not in Asia. Since the 1980s, the number of cases of yellow fever has been increasing. This is believed to be due to fewer people being immune, more people living in cities, people moving frequently, and changing climate increasing the habitat for mosquitoes. The disease originated in Africa and spread to the Americas starting in the 17th century with the European trafficking of enslaved Africans from sub-Saharan Africa. Since the 17th century, several major outbreaks of the disease have occurred in the Americas, Africa, and Europe. In the 18th and 19th centuries, yellow fever was considered one of the most dangerous infectious diseases; numerous epidemics swept through major cities of the US and other parts of the world. In 1927, the yellow fever virus became the first human virus to be isolated. Signs and symptoms Yellow fever begins after an incubation period of three to six days. Most cases cause only mild infection with fever, headache, chills, back pain, fatigue, loss of appetite, muscle pain, nausea, and vomiting. In these cases, the infection lasts only three to six days. But in 15% of cases, people enter a second, toxic phase of the disease characterized by recurring fever, this time accompanied by jaundice due to liver damage, as well as abdominal pain. Bleeding in the mouth, nose, eyes, and the gastrointestinal tract causes vomit containing blood, hence one of the names in Spanish for yellow fever, vómito negro ("black vomit"). There may also be kidney failure, hiccups, and delirium. Among those who develop jaundice, the fatality rate is 20 to 50%, while the overall fatality rate is about 3 to 7.5%. Severe cases may have a mortality rate greater than 50%.
Surviving the infection provides lifelong immunity, and normally results in no permanent organ damage. Complications Yellow fever can lead to death for 20% to 50% of those who develop severe disease. Jaundice, fatigue, heart rhythm problems, seizures and internal bleeding may also appear as complications of yellow fever during recovery time. Cause Yellow fever is caused by yellow fever virus (YFV), an enveloped RNA virus 40–50 nm in width, the type species and namesake of the family Flaviviridae. It was the first illness shown to be transmissible by filtered human serum, and was shown to be transmitted by mosquitoes by the American doctor Walter Reed around 1900. The positive-sense, single-stranded RNA is around 10,862 nucleotides long and has a single open reading frame encoding a polyprotein. Host proteases cut this polyprotein into three structural (C, prM, E) and seven nonstructural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B, NS5); the enumeration corresponds to the arrangement of the protein coding genes in the genome. A minimal region of the YFV 3′ UTR is required for stalling of the host 5′–3′ exonuclease XRN1. The 3′ UTR contains the PKS3 pseudoknot structure, which serves as a molecular signal to stall the exonuclease and is the only viral requirement for subgenomic flavivirus RNA (sfRNA) production. The sfRNAs are a result of incomplete degradation of the viral genome by the exonuclease and are important for viral pathogenicity. Yellow fever belongs to the group of hemorrhagic fevers. The viruses infect, amongst others, monocytes, macrophages, Schwann cells, and dendritic cells. They attach to the cell surfaces via specific receptors and are taken up by an endosomal vesicle. Inside the endosome, the decreased pH induces the fusion of the endosomal membrane with the virus envelope. The capsid enters the cytosol, decays, and releases the genome. Receptor binding, as well as membrane fusion, are catalyzed by the protein E, which changes its conformation at low pH, causing a rearrangement of the 90 homodimers to 60 homotrimers. After entering the host cell, the viral genome is replicated in the rough endoplasmic reticulum (ER) and in the so-called vesicle packets. At first, an immature form of the virus particle is produced inside the ER, whose M-protein is not yet cleaved to its mature form, so is denoted as precursor M (prM) and forms a complex with protein E. The immature particles are processed in the Golgi apparatus by the host protein furin, which cleaves prM to M. This releases E from the complex, which can now take its place in the mature, infectious virion. Transmission Yellow fever virus is mainly transmitted through the bite of the yellow fever mosquito Aedes aegypti, but other mosquitoes, mostly of the genus Aedes, such as the tiger mosquito (Aedes albopictus), can also serve as a vector for this virus. Like other arboviruses, which are transmitted by mosquitoes, yellow fever virus is taken up by a female mosquito when it ingests the blood of an infected human or another primate. Viruses reach the stomach of the mosquito, and if the virus concentration is high enough, the virions can infect epithelial cells and replicate there. From there, they reach the haemocoel (the blood system of mosquitoes) and from there the salivary glands. When the mosquito next sucks blood, it injects its saliva into the wound, and the virus reaches the bloodstream of the bitten person. Transovarial and transstadial transmission of yellow fever virus within A.
aegypti, that is, transmission from a female mosquito to its eggs and then larvae, have also been indicated. This infection of vectors without a previous blood meal seems to play a role in isolated, sudden outbreaks of the disease. Three epidemiologically different infectious cycles occur in which the virus is transmitted from mosquitoes to humans or other primates. In the "urban cycle", only the yellow fever mosquito A. aegypti is involved. It is well adapted to urban areas, and can also transmit other diseases, including Zika fever, dengue fever, and chikungunya. The urban cycle is responsible for the major outbreaks of yellow fever that occur in Africa. Except for an outbreak in Bolivia in 1999, this urban cycle no longer exists in South America. Besides the urban cycle, both in Africa and South America, a sylvatic cycle (forest or jungle cycle) is present, where Aedes africanus (in Africa) or mosquitoes of the genera Haemagogus and Sabethes (in South America) serve as vectors. In the jungle, the mosquitoes infect mainly nonhuman primates; the disease is mostly asymptomatic in African primates. In South America, the sylvatic cycle is currently the only way unvaccinated humans can become infected, which explains the low incidence of yellow fever cases on the continent. People who become infected in the jungle can carry the virus to urban areas, where A. aegypti acts as a vector. Because of this sylvatic cycle, yellow fever cannot be eradicated except by completely eradicating the mosquitoes that serve as vectors. In Africa, a third infectious cycle, known as the "savannah cycle" or intermediate cycle, occurs between the jungle and urban cycles. Different mosquitoes of the genus Aedes are involved. In recent years, this has been the most common form of transmission of yellow fever in Africa. Concern exists about yellow fever spreading to southeast Asia, where its vector A. aegypti already occurs. Pathogenesis After transmission from a mosquito, the viruses replicate in the lymph nodes and infect dendritic cells in particular. From there, they reach the liver and infect hepatocytes (probably indirectly via Kupffer cells), which leads to eosinophilic degradation of these cells and to the release of cytokines. Apoptotic masses known as Councilman bodies appear in the cytoplasm of hepatocytes. Fatality may occur when cytokine storm, shock, and multiple organ failure follow. Diagnosis Yellow fever is most frequently a clinical diagnosis, based on symptomatology and travel history. Mild cases of the disease can only be confirmed virologically. Since mild cases of yellow fever can also contribute significantly to regional outbreaks, every suspected case of yellow fever (involving symptoms of fever, pain, nausea, and vomiting 6–10 days after leaving the affected area) is treated seriously. If yellow fever is suspected, the virus cannot be confirmed until 6–10 days following the illness. A direct confirmation can be obtained by reverse transcription polymerase chain reaction, where the genome of the virus is amplified. Another direct approach is the isolation of the virus and its growth in cell culture using blood plasma; this can take 1–4 weeks. Serologically, an enzyme-linked immunosorbent assay during the acute phase of the disease using specific IgM against yellow fever or an increase in specific IgG titer (compared to an earlier sample) can confirm yellow fever. Together with clinical symptoms, the detection of IgM or a four-fold increase in IgG titer is considered sufficient indication for yellow fever.
As these tests can cross-react with other flaviviruses, such as dengue virus, these indirect methods cannot conclusively prove yellow fever infection. Liver biopsy can verify inflammation and necrosis of hepatocytes and detect viral antigens. Because of the bleeding tendency of yellow fever patients, a biopsy is only advisable post mortem to confirm the cause of death. In a differential diagnosis, infections with yellow fever must be distinguished from other feverish illnesses such as malaria. Other viral hemorrhagic fevers, such as Ebola virus, Lassa virus, Marburg virus, and Junin virus, must be excluded as the cause. Prevention Personal prevention of yellow fever includes vaccination and avoidance of mosquito bites in areas where yellow fever is endemic. Institutional measures for the prevention of yellow fever include vaccination programmes and measures to control mosquitoes. Programmes for distribution of mosquito nets for use in homes produce reductions in malaria and yellow fever. EPA-registered insect repellent is recommended when outdoors; exposure for even a short time is enough for a potential mosquito bite. Long-sleeved clothing, long pants, and socks are useful for prevention. Applying larvicides to water-storage containers can help eliminate potential mosquito breeding sites. EPA-registered insecticide spray decreases the transmission of yellow fever. When outdoors, an insect repellent containing DEET, picaridin, ethyl butylacetylaminopropionate (IR3535), or oil of lemon eucalyptus should be used on exposed skin. Mosquitoes may bite through thin clothing, so spraying clothes with repellent containing permethrin or another EPA-registered repellent gives extra protection. Clothing treated with permethrin is commercially available. Mosquito repellents containing permethrin are not approved for application directly to the skin. The peak biting times for many mosquito species are dusk to dawn. However, A. aegypti, one of the mosquitoes that transmit yellow fever virus, feeds during the daytime. Staying in accommodations with screened or air-conditioned rooms, particularly during peak biting times, also reduces the risk of mosquito bites. Vaccination Vaccination is recommended for those traveling to affected areas, because non-native people tend to develop more severe illness when infected. Protection begins by the 10th day after vaccine administration in 95% of people, and has been reported to last for at least 10 years. The World Health Organization (WHO) now states that a single dose of vaccine is sufficient to confer lifelong immunity against yellow fever disease. The attenuated live vaccine strain 17D was developed in 1937 by Max Theiler. The WHO recommends routine vaccination for people living in affected areas between the 9th and 12th month after birth. Up to one in four people experience fever, aches, local soreness, and redness at the injection site. In rare cases (less than one in 200,000 to 300,000), the vaccination can cause yellow fever vaccine-associated viscerotropic disease, which is fatal in 60% of cases. It is probably due to the genetic morphology of the immune system. Another possible side effect is an infection of the nervous system, which occurs in one in 200,000 to 300,000 cases, causing yellow fever vaccine-associated neurotropic disease, which can lead to meningoencephalitis and is fatal in less than 5% of cases. The Yellow Fever Initiative, launched by the WHO in 2006, vaccinated more than 105 million people in 14 countries in West Africa.
No outbreaks were reported during 2015. The campaign was supported by the GAVI alliance and governmental organizations in Europe and Africa. According to the WHO, mass vaccination cannot eliminate yellow fever because of the vast number of infected mosquitoes in urban areas of the target countries, but it will significantly reduce the number of people infected. Demand for yellow fever vaccines has continued to increase due to the growing number of countries implementing yellow fever vaccination as part of their routine immunization programmes. Recent upsurges in yellow fever outbreaks in Angola (2015), the Democratic Republic of Congo (2016), Uganda (2016), and more recently in Nigeria and Brazil in 2017 have further increased demand, while straining global vaccine supply. Therefore, to vaccinate susceptible populations in preventive mass immunization campaigns during outbreaks, fractional dosing of the vaccine is being considered as a dose-sparing strategy to maximize limited vaccine supplies. Fractional dose yellow fever vaccination refers to administration of a reduced volume of vaccine dose, which has been reconstituted as per manufacturer recommendations. The first practical use of fractional dose yellow fever vaccination was in response to a large yellow fever outbreak in the Democratic Republic of the Congo in mid-2016. Available evidence shows that fractional dose yellow fever vaccination induces a level of immune response similar to that of the standard full dose. In March 2017, the WHO launched a vaccination campaign in Brazil with 3.5 million doses from an emergency stockpile, and recommended vaccination for travellers to certain parts of the country. In March 2018, Brazil shifted its policy and announced it planned to vaccinate all 77.5 million currently unvaccinated citizens by April 2019. Compulsory vaccination Some countries in Asia are considered to be potentially in danger of yellow fever epidemics, as both mosquitoes capable of transmitting yellow fever and susceptible monkeys are present. The disease does not yet occur in Asia. To prevent the introduction of the virus, some countries demand previous vaccination of foreign visitors who have passed through yellow fever areas. Vaccination has to be proved by a vaccination certificate, which is valid from 10 days after the vaccination and lasts for 10 years. Although the WHO on 17 May 2013 advised that subsequent booster vaccinations are unnecessary, a certificate more than 10 years old may not be accepted at all border posts in all affected countries. A list of the countries that require yellow fever vaccination is published by the WHO. If the vaccination cannot be given for some reason, dispensation may be possible. In this case, an exemption certificate issued by a WHO-approved vaccination center is required. Although 32 of 44 countries where yellow fever occurs endemically do have vaccination programmes, in many of these countries, less than 50% of their population is vaccinated. Vector control Control of the yellow fever mosquito A. aegypti is of major importance, especially because the same mosquito can also transmit dengue fever and chikungunya disease. A. aegypti breeds preferentially in standing water, for example in water-storage installations used by inhabitants of areas with precarious drinking water supplies, or in domestic refuse, especially tires, cans, and plastic bottles. These conditions are common in urban areas in developing countries. Two main strategies are employed to reduce A. aegypti populations.
One approach is to kill the developing larvae. Measures are taken to reduce the water accumulations in which the larvae develop. Larvicides are used, along with larvae-eating fish and copepods, which reduce the number of larvae. For many years, copepods of the genus Mesocyclops have been used in Vietnam for preventing dengue fever. This eradicated the mosquito vector in several areas. Similar efforts may prove effective against yellow fever. Pyriproxyfen is recommended as a chemical larvicide, mainly because it is safe for humans and effective in small doses. The second strategy is to reduce populations of the adult yellow fever mosquito. Lethal ovitraps can reduce Aedes populations, using smaller amounts of pesticide because they target the pest directly. Curtains and lids of water tanks can be sprayed with insecticides, but application inside houses is not recommended by the WHO. Insecticide-treated mosquito nets are effective, just as they are against the Anopheles mosquito that carries malaria. Treatment As with other Flavivirus infections, no cure is known for yellow fever. Hospitalization is advisable and intensive care may be necessary because of rapid deterioration in some cases. Certain acute treatment methods lack efficacy: passive immunization after the emergence of symptoms is probably without effect; ribavirin and other antiviral drugs, as well as treatment with interferons, are ineffective in yellow fever patients. Symptomatic treatment includes rehydration and pain relief with drugs such as paracetamol (acetaminophen). However, aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs) are often avoided because of an increased risk of gastrointestinal bleeding due to their anticoagulant effects. Epidemiology Yellow fever is common in tropical and subtropical areas of South America and Africa. Worldwide, about 600 million people live in endemic areas. The WHO estimates 200,000 cases of yellow fever worldwide each year. About 15% of people infected with yellow fever progress to a severe form of the illness, and up to half of those will die, as there is no cure for yellow fever. Africa An estimated 90% of yellow fever infections occur on the African continent. In 2016, a large outbreak originated in Angola and spread to neighboring countries before being contained by a massive vaccination campaign. In March and April 2016, 11 imported cases of the Angola genotype in unvaccinated Chinese nationals were reported in China, the first appearance of the disease in Asia in recorded history. Phylogenetic analysis has identified seven genotypes of yellow fever viruses, and they are assumed to be differently adapted to humans and to the vector A. aegypti. Five genotypes (Angola, Central/East Africa, East Africa, West Africa I, and West Africa II) occur only in Africa. West Africa genotype I, found in Nigeria and the surrounding region, appears to be especially infectious, as it is often associated with major outbreaks. The three genotypes found outside of Nigeria and Angola occur in areas where outbreaks are rare. Two outbreaks, in Kenya (1992–1993) and Sudan (2003 and 2005), involved the East African genotype, which had remained undetected in the previous 40 years. South America In South America, two genotypes have been identified (South American genotypes I and II). Based on phylogenetic analysis these two genotypes appear to have originated in West Africa and were first introduced into Brazil.
The date of introduction of the predecessor African genotype which gave rise to the South American genotypes appears to be 1822 (95% confidence interval 1701 to 1911). The historical record shows an outbreak of yellow fever occurred in Recife, Brazil, between 1685 and 1690. The disease seems to have disappeared, with the next outbreak occurring in 1849. It was likely introduced through the slave trade from Africa. Genotype I has been divided into five subclades, A through E. In late 2016, a large outbreak began in Minas Gerais state of Brazil that was characterized as a sylvatic or jungle epizootic. Real-time phylogenetic investigations at the epicentre of the outbreak revealed that the outbreak was caused by the introduction of a virus lineage from the Amazon region into the southeast region around July 2016, spreading rapidly across several neotropical monkey species, including brown howler monkeys, which serve as a sentinel species for yellow fever. No cases had been transmitted between humans by the A. aegypti mosquito, which can sustain urban outbreaks that can spread rapidly. In April 2017, the sylvatic outbreak continued moving toward the Brazilian coast, where most people were unvaccinated. By the end of May the outbreak appeared to be declining after more than 3,000 suspected cases, of which 758 were confirmed, and 264 deaths confirmed to be yellow fever. The Health Ministry launched a vaccination campaign and was concerned about spread during the Carnival season in February and March. The CDC issued a Level 2 alert (practice enhanced precautions). A Bayesian analysis of genotypes I and II has shown that genotype I accounts for virtually all the current infections in Brazil, Colombia, Venezuela, and Trinidad and Tobago, while genotype II accounted for all cases in Peru. Genotype I originated in the northern Brazilian region around 1908 (95% highest posterior density interval [HPD]: 1870–1936). Genotype II originated in Peru in 1920 (95% HPD: 1867–1958). The estimated rate of mutation for both genotypes was about 5 × 10−4 substitutions/site/year, similar to that of other RNA viruses. Asia The main vector (A. aegypti) also occurs in tropical and subtropical regions of Asia, the Pacific, and Australia, but yellow fever had never occurred there until jet travel introduced 11 cases from the 2016 Angola and DR Congo yellow fever outbreak in Africa. Proposed explanations include: that the strains of the mosquito in the east are less able to transmit yellow fever virus; that immunity is present in the populations because of other diseases caused by related viruses (for example, dengue); and that the disease was never introduced because the shipping trade was insufficient. But none of these is considered satisfactory. Another proposal is the absence of a slave trade to Asia on the scale of that to the Americas. The trans-Atlantic slave trade probably introduced yellow fever into the Western Hemisphere from Africa. History Early history The evolutionary origins of yellow fever most likely lie in Africa, with transmission of the disease from nonhuman primates to humans. The virus is thought to have originated in East or Central Africa and spread from there to West Africa. As it was endemic in Africa, local populations had developed some immunity to it. When an outbreak of yellow fever would occur in an African community where colonists resided, most Europeans died, while the indigenous Africans usually developed nonlethal symptoms resembling influenza.
This phenomenon, in which certain populations develop immunity to yellow fever due to prolonged exposure in their childhood, is known as acquired immunity. The virus, as well as the vector A. aegypti, were probably transferred to North and South America with the trafficking of slaves from Africa, part of the Columbian exchange following European exploration and colonization. However, some researchers have argued that yellow fever might have existed in the Americas during the pre-Columbian period as mosquitoes of the genus Haemagogus, which is indigenous to the Americas, have been known to carry the disease. The first definitive outbreak of yellow fever in the New World was in 1647 on the island of Barbados. An outbreak was recorded by Spanish colonists in 1648 in the Yucatán Peninsula, where the indigenous Mayan people called the illness xekik ("blood vomit"). In 1685, Brazil suffered its first epidemic in Recife. The first mention of the disease by the name "yellow fever" occurred in 1744, in John Mitchell's accounts of the outbreaks he treated in Virginia in 1737, 1741, and 1742 (published posthumously in The Philadelphia Medical Museum, 1805, and the American Medical and Philosophical Register, 1814). Mitchell noted that "the distemper was what is generally called yellow fever in America" and attributed the name to "the yellowness which is so remarkable in this distemper". Mitchell, however, misidentified the cause of the illness, believing it was transmitted through "putrid miasma" in the air; moreover, he probably misdiagnosed the disease that he observed and treated, which was likely Weil's disease or hepatitis. McNeill argues that the environmental and ecological disruption caused by the introduction of sugar plantations created the conditions for mosquito and viral reproduction, and subsequent outbreaks of yellow fever. Deforestation reduced populations of insectivorous birds and other creatures that fed on mosquitoes and their eggs. In Colonial times and during the Napoleonic Wars, the West Indies were known as a particularly dangerous posting for soldiers due to yellow fever being endemic in the area. The mortality rate in British garrisons in Jamaica was seven times that of garrisons in Canada, mostly because of yellow fever and other tropical diseases. Both English and French forces posted there were seriously affected by the "yellow jack". Wanting to regain control of the lucrative sugar trade in Saint-Domingue (Hispaniola), and with an eye on regaining France's New World empire, Napoleon sent an army under the command of his brother-in-law General Charles Leclerc to Saint-Domingue to seize control after a slave revolt. The historian J. R. McNeill asserts that yellow fever accounted for about 35,000 to 45,000 casualties of these forces during the fighting. Only one-third of the French troops survived for withdrawal and return to France. Napoleon gave up on the island and his plans for North America, selling the Louisiana territory to the US in 1803 in the Louisiana Purchase. In 1804, Haiti proclaimed its independence as the second republic in the Western Hemisphere.
Considerable debate exists over whether the number of deaths caused by disease in the Haitian Revolution was exaggerated. Although yellow fever is most prevalent in tropical-like climates, the northern United States was not exempt from the fever. The first outbreak in English-speaking North America occurred in New York City in 1668. English colonists in Philadelphia and the French in the Mississippi River Valley recorded major outbreaks in 1669, as well as additional yellow fever epidemics in Philadelphia, Baltimore, and New York City in the 18th and 19th centuries. The disease traveled along steamboat routes from New Orleans, causing some 100,000–150,000 deaths in total. The yellow fever epidemic of 1793 in Philadelphia, which was then the capital of the United States, resulted in the deaths of several thousand people, more than 9% of the population. One of these deaths was James Hutchinson, a physician helping to treat the population of the city. The national government, including President George Washington, fled the city to Trenton, New Jersey. The southern city of New Orleans was plagued with major epidemics during the 19th century, most notably in 1833 and 1853. A major epidemic occurred in both New Orleans and Shreveport, Louisiana, in 1873. Its residents called the disease "yellow jack". Urban epidemics continued in the United States until 1905, with the last outbreak affecting New Orleans. At least 25 major outbreaks took place in the Americas during the 18th and 19th centuries, including particularly serious ones in Cartagena, in present-day Colombia, in 1741; Cuba in 1762 and 1900; Santo Domingo in 1803; and Memphis, Tennessee, in 1878. In the early 19th century, the prevalence of yellow fever in the Caribbean "led to serious health problems" and alarmed the United States Navy as numerous deaths and sickness curtailed naval operations and destroyed morale. One episode began in April 1822 when the frigate USS Macedonian left Boston and became part of Commodore James Biddle's West India Squadron. Unbeknownst to all, they were about to embark on a cruise to disaster and their assignment "would prove a cruise through hell". Secretary of the Navy Smith Thompson had assigned the squadron to guard United States merchant shipping and suppress piracy. During their time on deployment from 26 May to 3 August 1822, 76 of the Macedonian's officers and men died, including John Cadle, surgeon USN. Seventy-four of these deaths were attributed to yellow fever. Biddle reported that another 52 of his crew were on the sick list. In their report to the secretary of the Navy, Biddle and Surgeon's Mate Charles Chase stated the cause as "fever". As a consequence of this loss, Biddle noted that his squadron was forced to return to Norfolk Navy Yard early. Upon arrival, the Macedonian's crew were provided medical care and quarantined at Craney Island, Virginia. In 1853, Cloutierville, Louisiana, had a late-summer outbreak of yellow fever that quickly killed 68 of the 91 inhabitants. A local doctor concluded that some unspecified infectious agent had arrived in a package from New Orleans. In 1854, 650 residents of Savannah, Georgia, died from yellow fever. In 1858, St. Matthew's German Evangelical Lutheran Church in Charleston, South Carolina, had 308 yellow fever deaths, reducing the congregation by half. A ship carrying persons infected with the virus arrived in Hampton Roads in southeastern Virginia in June 1855.
The disease spread quickly through the community, eventually killing over 3,000 people, mostly residents of Norfolk and Portsmouth. In 1873, Shreveport, Louisiana, lost 759 citizens in an 80-day period to a yellow fever epidemic, with over 400 additional victims eventually succumbing. The total death toll from August through November was approximately 1,200. In 1878, about 20,000 people died in a widespread epidemic in the Mississippi River Valley. That year, Memphis had an unusually large amount of rain, which led to an increase in the mosquito population. The result was a huge epidemic of yellow fever. The steamship John D. Porter took people fleeing Memphis northward in hopes of escaping the disease, but passengers were not allowed to disembark due to concerns of spreading yellow fever. The ship roamed the Mississippi River for the next two months before unloading her passengers. Major outbreaks have also occurred in southern Europe. Gibraltar lost many lives to outbreaks in 1804, 1814, and 1828. Barcelona suffered the loss of several thousand citizens during an outbreak in 1821. The Duke de Richelieu deployed 30,000 French troops to the border between France and Spain in the Pyrenees Mountains, to establish a cordon sanitaire to prevent the epidemic from spreading from Spain into France. Causes and transmission Ezekiel Stone Wiggins, known as the Ottawa Prophet, proposed that the cause of a yellow fever epidemic in Jacksonville, Florida, in 1888, was astrological. In 1848, Josiah C. Nott suggested that yellow fever was spread by insects such as moths or mosquitoes, basing his ideas on the pattern of transmission of the disease. Carlos Finlay, a Cuban-Spanish doctor and scientist, proposed in 1881 that yellow fever might be transmitted by previously infected mosquitoes rather than by direct contact from person to person, as had long been believed. Since the losses from yellow fever in the Spanish–American War in the 1890s were extremely high, U.S. Army doctors began research experiments with a team led by Walter Reed, and composed of doctors James Carroll, Aristides Agramonte, and Jesse William Lazear. They successfully proved Finlay's "mosquito hypothesis". Yellow fever was the first virus shown to be transmitted by mosquitoes. The physician William Gorgas applied these insights and eradicated yellow fever from Havana. He also campaigned against yellow fever during the construction of the Panama Canal. A previous French effort to build a canal had failed in part because of the high incidence of yellow fever and malaria, which killed many workers. Although Reed has received much of the credit in United States history books for "beating" yellow fever, he had fully credited Finlay with the discovery of the yellow fever vector, and how it might be controlled. Reed often cited Finlay's papers in his articles and also credited him for the discovery in his correspondence. The acceptance of Finlay's work was one of the most important and far-reaching effects of the U.S. Army Yellow Fever Commission of 1900. Applying methods first suggested by Finlay, the United States government and Army eradicated yellow fever in Cuba and later in Panama, allowing completion of the Panama Canal. While Reed built on the research of Finlay, historian François Delaporte notes that yellow fever research was a contentious issue. Scientists, including Finlay and Reed, became successful by building on the work of less prominent scientists, without always giving them the credit they were due.
Reed's research was essential in the fight against yellow fever. He is also credited with using the first type of medical consent form during his experiments in Cuba, an attempt to ensure that participants knew they were taking a risk by being part of testing. Like Cuba and Panama, Brazil led a highly successful sanitation campaign against mosquitoes and yellow fever. Beginning in 1903, the campaign led by Oswaldo Cruz, then director general of public health, resulted not only in eradicating the disease but also in reshaping the physical landscape of Brazilian cities such as Rio de Janeiro. During rainy seasons, Rio de Janeiro regularly suffered floods, as water from the bay surrounding the city overflowed into Rio's narrow streets. Coupled with the poor drainage systems found throughout Rio, this created swampy conditions in the city's neighborhoods. Pools of stagnant water stood year-long in city streets and proved to be fertile ground for disease-carrying mosquitoes. Thus, under Cruz's direction, public health units known as "mosquito inspectors" worked fiercely to combat yellow fever throughout Rio by spraying, exterminating rats, improving drainage, and destroying unsanitary housing. Ultimately, the city's sanitation and renovation campaigns reshaped Rio de Janeiro's neighborhoods. Its poor residents were pushed from city centers to Rio's suburbs, or to towns found in the outskirts of the city. In later years, Rio's most impoverished inhabitants would come to reside in favelas. During 1920–1923, the Rockefeller Foundation's International Health Board undertook an expensive and successful yellow fever eradication campaign in Mexico. The IHB gained the respect of Mexico's federal government because of the success. The eradication of yellow fever strengthened the relationship between the US and Mexico, which had been poor in the preceding years. The eradication of yellow fever was also a major step toward better global health. In 1927, scientists isolated the yellow fever virus in West Africa. Following this, two vaccines were developed in the 1930s. Max Theiler led the completion of the 17D yellow fever vaccine in 1937, for which he was subsequently awarded the Nobel Prize in Physiology or Medicine. That vaccine, 17D, is still in use, although newer vaccines, based on Vero cells, are in development (as of 2018). Current status Using vector control and strict vaccination programs, the urban cycle of yellow fever was nearly eradicated from South America. Since 1943, only a single urban outbreak in Santa Cruz de la Sierra, Bolivia, has occurred. Since the 1980s, however, the number of yellow fever cases has been increasing again, and A. aegypti has returned to the urban centers of South America. This is partly due to limitations on available insecticides, as well as habitat dislocations caused by climate change. It is also because the vector control program was abandoned. Although no new urban cycle has yet been established, scientists believe this could happen again at any point. An outbreak in Paraguay in 2008 was thought to be urban in nature, but this ultimately proved not to be the case. In Africa, virus eradication programs have mostly relied upon vaccination. These programs have largely been unsuccessful because they were unable to break the sylvatic cycle involving wild primates. With few countries establishing regular vaccination programs, measures to fight yellow fever have been neglected, making the future spread of the virus more likely.
Research In the hamster model of yellow fever, early administration of the antiviral ribavirin is an effective treatment for many pathological features of the disease. Ribavirin treatment during the first five days after virus infection improved survival rates, reduced tissue damage in the liver and spleen, prevented hepatocellular steatosis, and normalised levels of alanine aminotransferase, a liver damage marker. The mechanism of action of ribavirin in reducing liver pathology in yellow fever virus infection may be similar to its activity in the treatment of hepatitis C, a related virus. Because ribavirin had failed to improve survival in a virulent rhesus model of yellow fever infection, it had been previously discounted as a possible therapy. Infection was reduced in mosquitoes carrying the wMel strain of Wolbachia. Yellow fever has been researched by several countries as a potential biological weapon.
Biology and health sciences
Infectious disease
null
34341
https://en.wikipedia.org/wiki/Year
Year
The year is a unit of time based on the roughly 365¼ days taken by the Earth to revolve around the Sun. The contemporary calendar year, based on the Gregorian solar calendar, approximates this cycle. The term "year" is also used to indicate other periods of roughly similar duration, such as the roughly 354-day cycle of the Moon's phases (see lunar calendar), as well as periods loosely associated with the calendar or astronomical year, such as the seasonal year, the fiscal year, the academic year, etc. Due to the Earth's axial tilt, the course of a year sees the passing of the seasons, marked by changes in weather, the hours of daylight, and, consequently, vegetation and soil fertility. In temperate and subpolar regions around the planet, four seasons are generally recognized: spring, summer, autumn, and winter. In tropical and subtropical regions, several geographical sectors do not present defined seasons; but in the seasonal tropics, the annual wet and dry seasons are recognized and tracked. By extension, the term "year" can also be applied to the time taken for any astronomical object to revolve around its primary, as for example the Martian year of roughly 1.88 Earth years. The term can also be used in reference to any long period or cycle, such as the Great Year. Calendar year A calendar year is an approximation of the number of days of the Earth's orbital period, as counted in a given calendar. The Gregorian calendar, or modern calendar, presents its calendar year as either a common year of 365 days or a leap year of 366 days, as does the Julian calendar. For the Gregorian calendar, the average length of the calendar year (the mean year) across the complete leap cycle of 400 years is 365.2425 days (97 out of 400 years are leap years). Abbreviation In English, the unit of time for year is commonly abbreviated as "y" or "yr". The symbol "a" (for Latin annus, year) is sometimes used in scientific literature, though its exact duration may be inconsistent. Etymology English year (via West Saxon ġēar, Anglian ġēr) continues Proto-Germanic *jǣran (*jē₁ran). Cognates are German Jahr, Old High German jār, Old Norse ár and Gothic jer, from a Proto-Indo-European noun meaning "year, season". Cognates also descended from the same Proto-Indo-European noun (with variation in suffix ablaut) are Avestan yārǝ "year", Greek hṓra "year, season, period of time" (whence "hour"), Old Church Slavonic jarŭ, and Latin hornus "of this year". Latin annus (a 2nd declension masculine noun; annum is the accusative singular; anni is genitive singular and nominative plural; anno the dative and ablative singular) is from a PIE noun *h₂et-no-, which also yielded Gothic aþn "year" (only the dative plural aþnam is attested). Although most languages treat the word as thematic *yeh₁r-o-, there is evidence for an original derivation with an *-r/n suffix, *yeh₁-ro-. Both Indo-European words for year, *yeh₁-ro- and *h₂et-no-, would then be derived from verbal roots meaning "to go, move", *h₁ey- and *h₂et-, respectively (compare Vedic Sanskrit éti "goes", atasi "thou goest, wanderest"). A number of English words are derived from Latin annus, such as annual, annuity, anniversary, etc.; per annum means "each year", anno Domini means "in the year of the Lord". The Greek word for "year", étos, is cognate with Latin vetus "old", from the PIE word *wetos- "year", also preserved in this meaning in Sanskrit words for "year" and "yearling (calf)", the latter also reflected in Latin vitulus "bull calf", English wether "ram" (Old English weðer, Gothic wiþrus "lamb").
In some languages, it is common to count years by referencing to one season, as in "summers", or "winters", or "harvests". Examples include Chinese 年 "year", originally 秂, an ideographic compound of a person carrying a bundle of wheat denoting "harvest". Slavic besides godŭ "time period; year" uses lěto "summer; year". Intercalation Astronomical years do not have an integer number of days or lunar months. Any calendar that follows an astronomical year must have a system of intercalation such as leap years. Julian calendar In the Julian calendar, the average (mean) length of a year is 365.25 days. In a non-leap year, there are 365 days; in a leap year there are 366 days. A leap year occurs every fourth year, during which a leap day is intercalated into the month of February. The name "Leap Day" is applied to the added day. In astronomy, the Julian year is a unit of time defined as 365.25 days, each of exactly 86,400 seconds (SI base unit), totaling exactly 31,557,600 seconds in the Julian astronomical year. Revised Julian calendar The Revised Julian calendar, proposed in 1923 and used in some Eastern Orthodox Churches, has 218 leap years every 900 years, for an average (mean) year length of 365.242222 days, close to the length of the mean tropical year, 365.24219 days (relative error of 9·10⁻⁸). In the year 2800 CE, the Gregorian and Revised Julian calendars will begin to differ by one calendar day. Gregorian calendar The Gregorian calendar aims to ensure that the northward equinox falls on or shortly before March 21 and hence it follows the northward equinox year, or tropical year. Because 97 out of 400 years are leap years, the mean length of the Gregorian calendar year is 365.2425 days; with a relative error below one ppm (8·10⁻⁷) relative to the current length of the mean tropical year (365.24219 days) and even closer to the current March equinox year of 365.242374 days that it aims to match. Other calendars Historically, lunisolar calendars intercalated entire leap months on an observational basis. Lunisolar calendars have mostly fallen out of use except for liturgical reasons (Hebrew calendar, various Hindu calendars). A modern adaptation of the historical Jalali calendar, known as the Solar Hijri calendar (1925), is a purely solar calendar with an irregular pattern of leap days based on observation (or astronomical computation), aiming to place new year (Nowruz) on the day of the vernal equinox (for the time zone of Tehran), as opposed to using an algorithmic system of leap years. Year numbering A calendar era assigns a cardinal number to each sequential year, using a reference event in the past (called the epoch) as the beginning of the era. The Gregorian calendar era is the world's most widely used civil era. Its epoch is a 6th-century estimate of the date of birth of Jesus of Nazareth. Two notations are used to indicate year numbering in the Gregorian calendar: the Christian "Anno Domini" (meaning "in the year of the Lord"), abbreviated AD; and "Common Era", abbreviated CE, preferred by many people of other faiths and of none. Year numbers are based on inclusive counting, so that there is no "year zero". Years before the epoch are abbreviated BC for Before Christ or BCE for Before the Common Era. In astronomical year numbering, positive numbers indicate years AD/CE, the number 0 designates 1 BC/BCE, −1 designates 2 BC/BCE, and so on. Other eras include that of Ancient Rome, ab urbe condita ("from the founding of the city"), abbreviated AUC; anno mundi ("year of the world"), used for the Hebrew calendar and abbreviated AM; and the Japanese imperial eras. 
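To make the intercalation rules above concrete, here is a minimal sketch in Python (an illustration, not from any cited source) of the Julian and Gregorian leap-year rules and the mean year lengths the three calendars imply:

```python
def is_julian_leap(year: int) -> bool:
    # Julian rule: a leap day every fourth year.
    return year % 4 == 0

def is_gregorian_leap(year: int) -> bool:
    # Gregorian rule: every fourth year, except century years
    # not divisible by 400 (so 1900 is common, 2000 is leap).
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Mean calendar-year lengths implied by the rules, in days:
print(365 + 1 / 4)      # Julian: 365.25
print(365 + 97 / 400)   # Gregorian: 365.2425
print(365 + 218 / 900)  # Revised Julian: 365.24222...
```

The printed values reproduce the mean year lengths quoted above; the Revised Julian figure sits closest to the mean tropical year of 365.24219 days.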
The Islamic Hijri year, (year of the Hijrah, abbreviated AH), is a lunar calendar of twelve lunar months and thus is shorter than a solar year. Pragmatic divisions Financial and scientific calculations often use a 365-day calendar to simplify daily rates. Fiscal year A fiscal year or financial year is a 12-month period used for calculating annual financial statements in businesses and other organizations. In many jurisdictions, regulations regarding accounting require such reports once per twelve months, but do not require that the twelve months constitute a calendar year. For example, in Canada and India the fiscal year runs from April 1; in the United Kingdom it runs from April 1 for purposes of corporation tax and government financial statements, but from April 6 for purposes of personal taxation and payment of state benefits; in Australia it runs from July 1; while in the United States the fiscal year of the federal government runs from October 1. Academic year An academic year is the annual period during which a student attends an educational institution. The academic year may be divided into academic terms, such as semesters or quarters. The school year in many countries starts in August or September and ends in May, June or July. In Israel the academic year begins around October or November, aligned with the second month of the Hebrew calendar. Some schools in the UK, Canada and the United States divide the academic year into three roughly equal-length terms (called trimesters or quarters in the United States), roughly coinciding with autumn, winter, and spring. At some, a shortened summer session, sometimes considered part of the regular academic year, is attended by students on a voluntary or elective basis. Other schools break the year into two main semesters, a first (typically August through December) and a second semester (January through May). Each of these main semesters may be split in half by mid-term exams, and each of the halves is referred to as a quarter (or term in some countries). There may also be a voluntary summer session or a short January session. Some other schools, including some in the United States, have four marking periods. Some schools in the United States, notably Boston Latin School, may divide the year into five or more marking periods. Defenders of this practice argue that report frequency may correlate positively with academic achievement. There are typically 180 days of teaching each year in schools in the US, excluding weekends and breaks, while there are 190 days for pupils in state schools in Canada, New Zealand and the United Kingdom, and 200 for pupils in Australia. In India the academic year normally starts on June 1 and ends on May 31. Though schools start closing from mid-March, the actual academic closure is on May 31. In Nepal the academic year starts on July 15. Schools and universities in Australia typically have academic years that roughly align with the calendar year (i.e., starting in February or March and ending in October to December), as the southern hemisphere experiences summer from December to February. Astronomical years Julian year The Julian year, as used in astronomy and other sciences, is a time unit defined as exactly 365.25 days of 86,400 SI seconds each ("ephemeris days"). This is the normal meaning of the unit "year" used in various scientific contexts. The Julian century of 36,525 ephemeris days and the Julian millennium of 365,250 ephemeris days are used in astronomical calculations. 
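The figures just given all follow from one constant. As a quick illustration (a Python sketch, not from any cited source), the Julian year, century, and millennium, and the light-year conventionally built on the Julian year, can be computed directly:

```python
DAY_S = 86_400                       # SI seconds per ephemeris day
JULIAN_YEAR_S = 365.25 * DAY_S       # 31,557,600 s exactly
print(365.25 * 100, 365.25 * 1000)   # 36,525 and 365,250 days

# By convention the light-year is the distance light covers in one Julian year:
C = 299_792_458                      # speed of light, m/s
print(C * JULIAN_YEAR_S)             # ~9.4607e15 m, one light-year
```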
Fundamentally, expressing a time interval in Julian years is a way to precisely specify an amount of time (not how many "real" years), for long time intervals where stating the number of ephemeris days would be unwieldy and unintuitive. By convention, the Julian year is used in the computation of the distance covered by a light-year. In the Unified Code for Units of Measure (but not according to the International Union of Pure and Applied Physics or the International Union of Geological Sciences, see below), the symbol "a" (without subscript) always refers to the Julian year, "aj", of exactly 31,557,600 seconds: 365.25 d × 86,400 s/d = 1 a = 1 aj = 31.5576 Ms. The SI multiplier prefixes may be applied to it to form "ka", "Ma", etc. Sidereal, tropical, and anomalistic years Each of these three years can be loosely called an astronomical year. The sidereal year is the time taken for the Earth to complete one revolution of its orbit, as measured against a fixed frame of reference (such as the fixed stars, Latin sidera, singular sidus). Its average duration is 365.256363 days (365 d 6 h 9 min 9.76 s) (at the epoch J2000.0 = January 1, 2000, 12:00:00 TT). Today the mean tropical year is defined as the period of time for the mean ecliptic longitude of the Sun to increase by 360 degrees. Since the Sun's ecliptic longitude is measured with respect to the equinox, the tropical year comprises a complete cycle of the seasons and is the basis of solar calendars such as the internationally used Gregorian calendar. The modern definition of mean tropical year differs from the actual time between passages of, e.g., the northward equinox, by a minute or two, for several reasons explained below. Because of the Earth's axial precession, this year is about 20 minutes shorter than the sidereal year. The mean tropical year is approximately 365 days, 5 hours, 48 minutes, 45 seconds, using the modern definition (= 365.24219 ephemeris days of 86,400 SI seconds). The length of the tropical year varies a bit over thousands of years because the rate of axial precession is not constant. The anomalistic year is the time taken for the Earth to complete one revolution with respect to its apsides. The orbit of the Earth is elliptical; the extreme points, called apsides, are the perihelion, where the Earth is closest to the Sun, and the aphelion, where the Earth is farthest from the Sun. The anomalistic year is usually defined as the time between perihelion passages. Its average duration is 365.259636 days (365 d 6 h 13 min 52.6 s) (at the epoch J2011.0). Draconic year The draconic year, draconitic year, eclipse year, or ecliptic year is the time taken for the Sun (as seen from the Earth) to complete one revolution with respect to the same lunar node (a point where the Moon's orbit intersects the ecliptic). The year is associated with eclipses: these occur only when both the Sun and the Moon are near these nodes; so eclipses occur within about a month of every half eclipse year. Hence there are two eclipse seasons every eclipse year. The average duration of the eclipse year is 346.620075 days (346 d 14 h 52 min 54 s) (at the epoch J2000.0). This term is sometimes erroneously used for the draconic or nodal period of lunar precession, that is, the period of a complete revolution of the Moon's ascending node around the ecliptic: about 18.61 Julian years (about 6,798 days; at the epoch J2000.0). Full moon cycle The full moon cycle is the time for the Sun (as seen from the Earth) to complete one revolution with respect to the perigee of the Moon's orbit. 
This period is associated with the apparent size of the full moon, and also with the varying duration of the synodic month. The duration of one full moon cycle is 411.78443 days (411 days 18 hours 49 minutes 35 seconds) (at the epoch J2000.0). Lunar year The lunar year comprises twelve full cycles of the phases of the Moon, as seen from Earth. It has a duration of approximately 354.37 days. Muslims use this for celebrating their Eids and for marking the start of the fasting month of Ramadan. A Muslim calendar year is based on the lunar cycle. The Jewish calendar is also essentially lunar, except that an intercalary lunar month is added once every two or three years, in order to keep the calendar synchronized with the solar cycle as well. Thus, a lunar year on the Jewish (Hebrew) calendar consists of either twelve or thirteen lunar months. Vague year The vague year, from Latin annus vagus or wandering year, is an integral approximation to the year equaling 365 days, which wanders in relation to more exact years. Typically the vague year is divided into 12 schematic months of 30 days each plus 5 epagomenal days. The vague year was used in the calendars of Ethiopia, Ancient Egypt, Iran, Armenia and in Mesoamerica among the Aztecs and Maya. It is still used by many Zoroastrian communities. Heliacal year A heliacal year is the interval between the heliacal risings of a star. It differs from the sidereal year for stars away from the ecliptic due mainly to the precession of the equinoxes. Sothic year The Sothic year is the heliacal year, the interval between heliacal risings, of the star Sirius. It is currently less than the sidereal year and its duration is very close to the Julian year of 365.25 days. Gaussian year The Gaussian year is the sidereal year for a planet of negligible mass (relative to the Sun) and unperturbed by other planets that is governed by the Gaussian gravitational constant. Such a planet would be slightly closer to the Sun than Earth's mean distance. Its length is 365.2568983 days (365 d 6 h 9 min 56 s). Besselian year The Besselian year is a tropical year that starts when the (fictitious) mean Sun reaches an ecliptic longitude of 280°. This is currently on or close to January 1. It is named after the 19th-century German astronomer and mathematician Friedrich Bessel. The following equation can be used to compute the current Besselian epoch (in years): B = 1900.0 + (Julian date_TT − 2415020.31352) / 365.242198781. The TT subscript indicates that for this formula, the Julian date should use the Terrestrial Time scale, or its predecessor, ephemeris time. Variation in the length of the year and the day The exact length of an astronomical year changes over time. The positions of the equinox and solstice points with respect to the apsides of Earth's orbit change: the equinoxes and solstices move westward relative to the stars because of precession, and the apsides move in the other direction because of the long-term effects of gravitational pull by the other planets. Since the speed of the Earth varies according to its position in its orbit as measured from its perihelion, Earth's speed when in a solstice or equinox point changes over time: if such a point moves toward perihelion, the interval between two passages decreases a little from year to year; if the point moves towards aphelion, that period increases a little from year to year. So a "tropical year" measured from one passage of the northward ("vernal") equinox to the next, differs from the one measured between passages of the southward ("autumnal") equinox. 
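As a concrete check of the Besselian-epoch formula quoted above, here is a short Python sketch (illustrative only); the test value reflects the well-known correspondence between J2000.0 and approximately B2000.001278:

```python
def besselian_epoch(jd_tt: float) -> float:
    """Besselian epoch for a Julian date on the Terrestrial Time scale."""
    return 1900.0 + (jd_tt - 2415020.31352) / 365.242198781

# J2000.0 is Julian date 2451545.0 TT:
print(besselian_epoch(2451545.0))  # ~2000.001278
```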
The average over the full orbit does not change because of this, so the length of the average tropical year does not change because of this second-order effect. Each planet's movement is perturbed by the gravity of every other planet. This leads to short-term fluctuations in its speed, and therefore its period from year to year. Moreover, it causes long-term changes in its orbit, and therefore also long-term changes in these periods. Tidal drag between the Earth and the Moon and Sun increases the length of the day and of the month (by transferring angular momentum from the rotation of the Earth to the revolution of the Moon); since the apparent mean solar day is the unit with which we measure the length of the year in civil life, the length of the year appears to decrease. The rotation rate of the Earth is also changed by factors such as post-glacial rebound and sea level rise. Numerical value of year variation Mean year lengths in this section are calculated for 2000, and differences in year lengths, compared to 2000, are given for past and future years. In the tables a day is 86,400 SI seconds long. Summary Some of the year lengths in this table are in average solar days, which are slowly getting longer (at a rate that cannot be exactly predicted in advance) and are now around 86,400.002 SI seconds. An average Gregorian year may be said to be 365.2425 days (52.1775 weeks, and if an hour is defined as one twenty-fourth of a day, 8,765.82 hours, 525,949.2 minutes or 31,556,952 seconds). Note however that in absolute time the average Gregorian year is not adequately defined unless the period of the averaging (start and end dates) is stated, because each period of 400 years is longer (by more than 1000 seconds) than the preceding one as the rotation of the Earth slows. In this calendar, a common year is 365 days (8,760 hours, 525,600 minutes or 31,536,000 seconds), and a leap year is 366 days (8,784 hours, 527,040 minutes or 31,622,400 seconds). The 400-year civil cycle of the Gregorian calendar has 146,097 days and hence exactly 20,871 weeks. Greater astronomical years Equinoctial cycle The Great Year, or equinoctial cycle, corresponds to a complete revolution of the equinoxes around the ecliptic. Its length is about 25,700 years. Galactic year The Galactic year is the time it takes Earth's Solar System to revolve once around the Galactic Center. It comprises roughly 230 million Earth years. Seasonal year A seasonal year is the time between successive recurrences of a seasonal event such as the flooding of a river, the migration of a species of bird, the flowering of a species of plant, the first frost, or the first scheduled game of a certain sport. All of these events can have wide variations of more than a month from year to year. Symbols and abbreviations A common symbol for the year as a unit of time is "a", taken from the Latin word annus. For example, the U.S. National Institute of Standards and Technology (NIST) Guide for the Use of the International System of Units (SI) supports the symbol "a" as the unit of time for a year. In English, the abbreviations "y" or "yr" are more commonly used in non-scientific literature. In some Earth sciences branches (geology and paleontology), "kyr, myr, byr" (thousands, millions, and billions of years, respectively) and similar abbreviations are used to denote intervals of time remote from the present. In astronomy the abbreviations kyr, Myr and Gyr are in common use for kiloyears, megayears and gigayears. 
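The calendar arithmetic quoted above is easy to verify; here is a brief Python check (illustrative only):

```python
mean_year = 365 + 97 / 400         # Gregorian mean year: 365.2425 days
print(mean_year / 7)               # 52.1775 weeks
print(mean_year * 24)              # 8,765.82 hours
print(mean_year * 24 * 60)         # 525,949.2 minutes
print(mean_year * 86_400)          # 31,556,952 seconds

cycle_days = 400 * 365 + 97        # days in the 400-year civil cycle
print(cycle_days, cycle_days / 7)  # 146,097 days = exactly 20,871 weeks
```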
The Unified Code for Units of Measure (UCUM) disambiguates the varying symbologies of ISO 1000, ISO 2955 and ANSI X3.50 by using: at = 365.24219 days for the mean tropical year; aj = 365.25 days for the mean Julian year; ag = 365.2425 days for the mean Gregorian year. In the UCUM, the symbol "a", without any qualifier, equals 1 aj. The UCUM also minimizes confusion with are, a unit of area, by using the abbreviation "ar". Since 1989, the International Astronomical Union (IAU) recognizes the symbol "a" rather than "yr" for a year, notes the different kinds of year, and recommends adopting the Julian year of 365.25 days, unless otherwise specified (IAU Style Manual). Since 1987, the International Union of Pure and Applied Physics (IUPAP) notes "a" as the general symbol for the time unit year (IUPAP Red Book). Since 1993, the International Union of Pure and Applied Chemistry (IUPAC) Green Book also uses the same symbol "a", notes the difference between Gregorian year and Julian year, and adopts the former (a = 365.2425 days), also noted in the IUPAC Gold Book. In 2011, the IUPAC and the International Union of Geological Sciences jointly recommended defining the "annus", with symbol "a", as the length of the tropical year in the year 2000: a = 31,556,925.445 seconds (approximately 365.24219265 ephemeris days). This differs from the above definition of 365.25 days by about 20 parts per million. The joint document says that definitions such as the Julian year "bear an inherent, pre-programmed obsolescence because of the variability of Earth's orbital movement", but then proposes using the length of the tropical year as of 2000 AD (specified down to the millisecond), which suffers from the same problem. (The tropical year oscillates with time by more than a minute.) The notation has proved controversial as it conflicts with an earlier convention among geoscientists to use "a" specifically for "years ago" (e.g. 1 Ma for 1 million years ago), and "y" or "yr" for a one-year time period. However, this historical practice does not comply with the NIST Guide, which deprecates both mixing information about the physical quantity being measured (in this case, time intervals or points in time) with the units and using abbreviations for units. Furthermore, according to the UK Metric Association (UKMA), language-independent symbols are more universally understood (UKMA Style guide). SI prefix multipliers For the following, there are alternative forms that elide the consecutive vowels, such as kilannus, megannus, etc. The exponents and exponential notations are typically used for calculating and in displaying calculations, and for conserving space, as in tables of data. Abbreviations for "years ago" In geology and paleontology, a distinction sometimes is made between the abbreviation "yr" for years and "ya" for years ago, combined with prefixes for thousand, million, or billion. In archaeology, dealing with more recent periods, normally expressed dates, e.g. "10,000 BC", may be used as a more traditional form than Before Present ("BP"). These abbreviations include "kya" (thousand years ago), "mya" (million years ago) and "bya" (billion years ago), alongside the unit-style "ka", "Ma" and "Ga". Use of "mya" and "bya" is deprecated in modern geophysics, the recommended usage being "Ma" and "Ga" for dates Before Present, but "m.y." for the durations of epochs. This ad hoc distinction between "absolute" time and time intervals is somewhat controversial amongst members of the Geological Society of America.
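A small Python sketch (illustrative, not part of UCUM itself) makes the UCUM distinctions just listed explicit:

```python
DAY_S = 86_400  # SI seconds per day

# UCUM year units, in days:
YEAR_UNITS = {
    "at": 365.24219,   # mean tropical year
    "aj": 365.25,      # mean Julian year (also plain "a" in UCUM)
    "ag": 365.2425,    # mean Gregorian year
}

for symbol, days in YEAR_UNITS.items():
    print(f"1 {symbol} = {days} d = {days * DAY_S:,.3f} s")

# SI prefixes apply as usual, e.g. 1 ka = 1000 aj:
print(1000 * YEAR_UNITS["aj"] * DAY_S)  # seconds in a kiloannus
```

Printed side by side, the three conventions differ by only fractions of a day per year, which is why the unqualified symbol "a" causes the ambiguity this section describes.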
Physical sciences
Physics
null
34359
https://en.wikipedia.org/wiki/Yoneda%20lemma
Yoneda lemma
In mathematics, the Yoneda lemma is a fundamental result in category theory. It is an abstract result on functors of the type "morphisms into a fixed object". It is a vast generalisation of Cayley's theorem from group theory (viewing a group as a miniature category with just one object and only isomorphisms). It allows the embedding of any locally small category into a category of functors (contravariant set-valued functors) defined on that category. It also clarifies how the embedded category, of representable functors and their natural transformations, relates to the other objects in the larger functor category. It is an important tool that underlies several modern developments in algebraic geometry and representation theory. It is named after Nobuo Yoneda. Generalities The Yoneda lemma suggests that instead of studying the locally small category C, one should study the category of all functors of C into Set (the category of sets with functions as morphisms). Set is a category we think we understand well, and a functor of C into Set can be seen as a "representation" of C in terms of known structures. The original category C is contained in this functor category, but new objects appear in the functor category, which were absent and "hidden" in C. Treating these new objects just like the old ones often unifies and simplifies the theory. This approach is akin to (and in fact generalizes) the common method of studying a ring by investigating the modules over that ring. The ring takes the place of the category C, and the category of modules over the ring is a category of functors defined on C. Formal statement Yoneda's lemma concerns functors from a fixed category C to the category of sets, Set. If C is a locally small category (i.e. the hom-sets are actual sets and not proper classes), then each object A of C gives rise to a functor to Set called a hom-functor. This functor is denoted: h_A = Hom(A, −). The (covariant) hom-functor h_A sends an object X to the set of morphisms Hom(A, X) and sends a morphism f : X → Y (where X and Y are objects in C) to the morphism f ∘ − (composition with f on the left) that sends a morphism g in Hom(A, X) to the morphism f ∘ g in Hom(A, Y). That is, h_A(f) = Hom(A, f) : g ↦ f ∘ g. Yoneda's lemma says that: for each functor F : C → Set and each object A of C, the natural transformations from h_A to F are in one-to-one correspondence with the elements of F(A); that is, Nat(h_A, F) ≅ F(A), and this isomorphism is natural in both A and F. Here the notation Set^C denotes the category of functors from C to Set. Given a natural transformation Φ from h_A to F, the corresponding element of F(A) is u = Φ_A(id_A); and given an element u of F(A), the corresponding natural transformation is given by Φ_X(f) = F(f)(u), which assigns to a morphism f : A → X the value F(f)(u) in F(X). Contravariant version There is a contravariant version of Yoneda's lemma, which concerns contravariant functors from C to Set. This version involves the contravariant hom-functor h^A = Hom(−, A), which sends an object X to the hom-set Hom(X, A). Given an arbitrary contravariant functor G from C to Set, Yoneda's lemma asserts that Nat(h^A, G) ≅ G(A). Naturality The bijections provided in the (covariant) Yoneda lemma (for each A and F) are the components of a natural isomorphism between two certain functors from C × Set^C to Set. One of the two functors is the evaluation functor (A, F) ↦ F(A), which sends a pair (f, Φ) of a morphism f : A → B in C and a natural transformation Φ : F → G to the map Φ_B ∘ F(f) = G(f) ∘ Φ_A : F(A) → G(B). This is enough to determine the other functor since we know what the natural isomorphism is. Under the second functor, (A, F) ↦ Nat(h_A, F), the image of a pair (f, Φ) is the map that sends a natural transformation Ψ : h_A → F to the natural transformation Φ ∘ Ψ ∘ Hom(f, −) : h_B → G, whose components are Φ_X ∘ Ψ_X ∘ Hom(f, X). Naming conventions The use of h_A for the covariant hom-functor and h^A for the contravariant hom-functor is not completely standard. Many texts and articles either use the opposite convention or completely unrelated symbols for these two functors. However, most modern algebraic geometry texts starting with Alexander Grothendieck's foundational EGA use the convention in this article. 
The mnemonic "falling into something" can be helpful in remembering that h_A is the covariant hom-functor. When the letter A is falling (i.e. a subscript), h_A assigns to an object X the morphisms from A into X. Proof Since Φ is a natural transformation, for each morphism f : A → X the following naturality square commutes: F(f) ∘ Φ_A = Φ_X ∘ h_A(f). This shows that the natural transformation Φ is completely determined by u = Φ_A(id_A), since for each morphism f : A → X one has Φ_X(f) = Φ_X(f ∘ id_A) = F(f)(Φ_A(id_A)) = F(f)(u). Moreover, any element u of F(A) defines a natural transformation in this way. The proof in the contravariant case is completely analogous. The Yoneda embedding An important special case of Yoneda's lemma is when the functor F from C to Set is another hom-functor h_B. In this case, the covariant version of Yoneda's lemma states that Nat(h_A, h_B) ≅ Hom(B, A). That is, natural transformations between hom-functors are in one-to-one correspondence with morphisms (in the reverse direction) between the associated objects. Given a morphism f : B → A, the associated natural transformation is denoted Hom(f, −). Mapping each object A in C to its associated hom-functor h_A = Hom(A, −) and each morphism f : B → A to the corresponding natural transformation Hom(f, −) determines a contravariant functor h from C to Set^C, the functor category of all (covariant) functors from C to Set. One can interpret h as a covariant functor: h : C^op → Set^C. The meaning of Yoneda's lemma in this setting is that the functor h is fully faithful, and therefore gives an embedding of C^op in the category of functors to Set. The collection of all functors {h_A : A in C} is a full subcategory of Set^C. Therefore, the Yoneda embedding implies that the category C^op is isomorphic to the category of hom-functors {h_A}. The contravariant version of Yoneda's lemma states that Nat(h^A, h^B) ≅ Hom(A, B). Therefore, A ↦ h^A gives rise to a covariant functor from C to the category of contravariant functors to Set: h^• : C → Set^(C^op), A ↦ Hom(−, A). Yoneda's lemma then states that any locally small category C can be embedded in the category of contravariant functors from C to Set via h^•. This is called the Yoneda embedding. The Yoneda embedding is sometimes denoted by よ, the hiragana Yo. Representable functor The Yoneda embedding essentially states that for every (locally small) category, objects in that category can be represented by presheaves, in a full and faithful manner. That is, Nat(h^A, P) ≅ P(A) for a presheaf P. Many common categories are, in fact, categories of pre-sheaves, and on closer inspection, prove to be categories of sheaves; as such examples are commonly topological in nature, they can be seen to be topoi in general. The Yoneda lemma provides a point of leverage by which the topological structure of a category can be studied and understood. In terms of (co)end calculus Given two categories C and D with two functors F, G : C → D, natural transformations between them can be written as the following end: Nat(F, G) = ∫_{c ∈ C} Hom_D(F(c), G(c)). For any functors K : C^op → Set and H : C → Set, the following formulas are all formulations of the Yoneda lemma: K(a) ≅ ∫_{c} Hom_Set(Hom_C(c, a), K(c)); K(a) ≅ ∫^{c} Hom_C(a, c) × K(c); H(a) ≅ ∫_{c} Hom_Set(Hom_C(a, c), H(c)); H(a) ≅ ∫^{c} Hom_C(c, a) × H(c). Preadditive categories, rings and modules A preadditive category is a category where the morphism sets form abelian groups and the composition of morphisms is bilinear; examples are categories of abelian groups or modules. In a preadditive category, there is both a "multiplication" and an "addition" of morphisms, which is why preadditive categories are viewed as generalizations of rings. Rings are preadditive categories with one object. The Yoneda lemma remains true for preadditive categories if we choose as our extension the category of additive contravariant functors from the original category into the category of abelian groups; these are functors which are compatible with the addition of morphisms and should be thought of as forming a module category over the original category. 
The Yoneda lemma then yields the natural procedure to enlarge a preadditive category so that the enlarged version remains preadditive; in fact, the enlarged version is an abelian category, a much more powerful condition. In the case of a ring R, the extended category is the category of all right modules over R, and the statement of the Yoneda lemma reduces to the well-known isomorphism Hom_R(R, M) ≅ M for all right modules M over R. Relationship to Cayley's theorem As stated above, the Yoneda lemma may be considered as a vast generalization of Cayley's theorem from group theory. To see this, let C be a category with a single object * such that every morphism is an isomorphism (i.e. a groupoid with one object). Then G = Hom(*, *) forms a group under the operation of composition, and any group can be realized as a category in this way. In this context, a covariant functor C → Set consists of a set X and a group homomorphism G → Perm(X), where Perm(X) is the group of permutations of X; in other words, X is a G-set. A natural transformation between such functors is the same thing as an equivariant map between G-sets: a set function α : X → Y with the property that α(g·x) = g·α(x) for all g in G and x in X. (On the left side of this equation, the · denotes the action of G on X, and on the right side the action on Y.) Now the covariant hom-functor Hom(*, −) corresponds to the action of G on itself by left-multiplication (the contravariant version corresponds to right-multiplication). The Yoneda lemma with F = Hom(*, −) states that Nat(Hom(*, −), Hom(*, −)) ≅ Hom(*, *), that is, the equivariant maps from this G-set to itself are in bijection with G. But it is easy to see that (1) these maps form a group under composition, which is a subgroup of Perm(G), and (2) the function which gives the bijection is a group homomorphism. (Going in the reverse direction, it associates to every g in G the equivariant map of right-multiplication by g.) Thus G is isomorphic to a subgroup of Perm(G), which is the statement of Cayley's theorem. History Yoshiki Kinoshita stated in 1996 that the term "Yoneda lemma" was coined by Saunders Mac Lane following an interview he had with Yoneda in the Gare du Nord station.
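The group-theoretic content of this passage can be checked mechanically for a small example. The following Python sketch (an illustration, not from any cited source) enumerates all equivariant self-maps of Z/3 acting on itself on the left, and confirms that they are exactly the right-multiplications, in bijection with the group:

```python
from itertools import product

G = [0, 1, 2]          # Z/3 under addition modulo 3
def op(a, b):          # the group operation
    return (a + b) % 3

def is_equivariant(f):
    # f is equivariant for the left action iff f(g*x) == g*f(x).
    return all(f[op(g, x)] == op(g, f[x]) for g, x in product(G, G))

# Enumerate every set map f: G -> G, encoded as the tuple (f(0), f(1), f(2)).
equivariant = [f for f in product(G, repeat=3) if is_equivariant(f)]

# Yoneda/Cayley: the equivariant maps are exactly the right-multiplications.
right_mults = [tuple(op(x, h) for x in G) for h in G]
assert sorted(equivariant) == sorted(right_mults)
print(len(equivariant))  # 3 == |G|, one map per group element
```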
Mathematics
Category theory
null
34367
https://en.wikipedia.org/wiki/Yersinia%20pestis
Yersinia pestis
Yersinia pestis (Y. pestis; formerly Pasteurella pestis) is a gram-negative, non-motile, coccobacillus bacterium without spores that is related to both Yersinia enterocolitica and Yersinia pseudotuberculosis, the pathogen from which Y. pestis evolved and which is responsible for Far East scarlet-like fever. It causes the disease plague, which was responsible for the Plague of Justinian and the Black Death, the deadliest pandemic in recorded history. Plague takes three main forms: pneumonic, septicemic, and bubonic. It is a facultative anaerobic organism that can infect humans primarily via the Oriental rat flea (Xenopsylla cheopis), but also through airborne droplets in its pneumonic form. Yersinia pestis is a parasite of its host, the rat flea, which is itself a parasite of rats; hence Y. pestis is a hyperparasite. Y. pestis was discovered in 1894 by Alexandre Yersin, a Swiss-French physician and bacteriologist from the Pasteur Institute, during an epidemic of the plague in Hong Kong. Yersin was a member of the Pasteur school of thought. Kitasato Shibasaburō, a Japanese bacteriologist who practised Koch's methodology, was also engaged at the time in finding the causative agent of the plague. However, it was Yersin who actually linked plague with a bacillus, initially named Pasteurella pestis; it was renamed Yersinia pestis in 1944. Between one thousand and two thousand cases of the plague are still reported to the World Health Organization every year. With proper antibiotic treatment, the prognosis for victims is much better than before antibiotics were developed. Cases in Asia increased five- to six-fold during the time of the Vietnam War, possibly due to the disruption of ecosystems and closer proximity between people and animals. The plague is now commonly found in sub-Saharan Africa and Madagascar, areas that now account for over 95% of reported cases. The plague also has a detrimental effect on non-human mammals; in the United States, these include the black-tailed prairie dog and the endangered black-footed ferret. General features Y. pestis is a non-motile coccobacillus, a facultative anaerobic bacterium with bipolar staining (giving it a safety-pin appearance) that produces an antiphagocytic slime layer. Similar to other Yersinia species, it tests negative for urease, lactose fermentation, and indole. The species grows best at temperatures of 28–30 °C and a pH of 7.2–7.6, but it can survive, growing slowly, over a much wider temperature and pH range. It dies very rapidly if exposed to UV light, dried out, or kept at temperatures higher than 40 °C. There are 11 species in the Yersinia genus, and three of them cause human diseases. The other two are Yersinia pseudotuberculosis and Yersinia enterocolitica; infections by either of these are usually acquired from ingesting contaminated food or water. Genome and proteome Genome Several complete genome sequences are available for various strains and subspecies of Y. pestis: strain KIM (of biovar Y. p. medievalis), and strain CO92 (of biovar Y. p. orientalis, obtained from a clinical isolate in the United States). In 2006 the genome sequence of a strain of biovar Antiqua was completed. Some strains are non-pathogenic, such as strain 91001, whose sequence was published in 2004. Plasmids Like Y. pseudotuberculosis and Y. enterocolitica, Y. pestis is host to the plasmid pCD1. It also hosts two other plasmids, pPCP1 (also called pPla or pPst) and pMT1 (also called pFra), that are not carried by the other Yersinia species. 
pFra codes for a phospholipase D that is important for the ability of Y. pestis to be transmitted by fleas. pPla codes for a protease, Pla, that activates plasmin in human hosts and is a very important virulence factor for pneumonic plague. Together, these plasmids, and a pathogenicity island called HPI, encode several proteins that cause the pathogenesis for which Y. pestis is famous. Among other things, these virulence factors are required for bacterial adhesion and injection of proteins into the host cell, invasion of bacteria into the host cell (via a type-III secretion system), and acquisition and binding of iron harvested from red blood cells (by siderophores). Y. pestis is thought to be descended from Y. pseudotuberculosis; DNA studies have found that the two are 83% similar, which is high enough to be considered the same species. In 1981 it was proposed that Y. pestis be reclassified as a subspecies of Y. pseudotuberculosis, but the Judicial Commission of the International Committee on Systematic Bacteriology declined to do this: the course of Y. pestis disease is so different from that of Y. pseudotuberculosis, which usually causes only a mild diarrhea, that reclassification would generate confusion. Proteome A comprehensive and comparative proteomics analysis of Y. pestis strain KIM was performed in 2006. The analysis focused on growth under four different sets of conditions that were designed to model flea and mammal hosts. Small noncoding RNA Numerous bacterial small noncoding RNAs have been identified to play regulatory functions. Some can regulate the virulence genes. Some 63 novel putative sRNAs were identified through deep sequencing of the Y. pestis sRNA-ome. Among them was the Yersinia-specific (also present in Y. pseudotuberculosis and Y. enterocolitica) Ysr141 (Yersinia small RNA 141). Ysr141 sRNA was shown to regulate the synthesis of the type III secretion system (T3SS) effector protein YopJ. The Yop-Ysc T3SS is a critical component of virulence for Yersinia species. Many novel sRNAs were identified from Y. pestis grown in vitro and in the infected lungs of mice, suggesting they play a role in bacterial physiology or pathogenesis. Among them were sR035, predicted to pair with the Shine–Dalgarno (SD) region and transcription initiation site of the thermo-sensitive regulator ymoA, and sR084, predicted to pair with fur, the ferric uptake regulator. Pathogenesis and immunity In the urban and sylvatic (forest) cycles of Y. pestis, most of the spreading occurs between rodents and fleas. In the sylvatic cycle, the rodent is wild, but in the urban cycle, the rodent is primarily the brown rat (Rattus norvegicus). In addition, Y. pestis can spread from the urban environment back to the sylvatic one, and vice versa. Transmission to humans is usually through the bite of infected fleas. If the disease has progressed to the pneumonic form, humans can spread the bacterium to others through airborne respiratory droplets; others who catch plague this way will mostly contract the pneumonic form themselves. Mammals as hosts Several species of rodents serve as the main reservoir for Y. pestis in the environment. In the steppes, the natural reservoir is believed to be principally the marmot. In the western United States, several species of rodents are thought to maintain Y. pestis. However, the expected disease dynamics have not been found in any rodent. Several species of rodents are known to have a variable resistance, which could lead to an asymptomatic carrier status. Evidence indicates fleas from other mammals have a role in human plague outbreaks. 
The lack of knowledge of the dynamics of plague in mammal species is also true among susceptible rodents such as the black-tailed prairie dog (Cynomys ludovicianus), in which plague can cause colony collapse, resulting in a massive effect on prairie food webs. However, the transmission dynamics within prairie dogs do not follow the dynamics of blocked fleas; carcasses, unblocked fleas, or another vector could possibly be important, instead. The CO92 strain was isolated from a patient who died from pneumonia and who contracted the infection from an infected cat. In other regions of the world, the reservoir of the infection is not clearly identified, which complicates prevention and early-warning programs. One such example was seen in a 2003 outbreak in Algeria. Fleas as vector The transmission of Y. pestis by fleas is well characterized. Initial acquisition of Y. pestis by the vector occurs during feeding on an infected animal. Several proteins then contribute to the maintenance of the bacteria in the flea digestive tract, among them the hemin storage system and Yersinia murine toxin (Ymt). Although Ymt is highly toxic to rodents and was once thought to be produced to ensure reinfection of new hosts, it is essential for flea colonization and for the survival of Y. pestis in fleas. The hemin storage system plays an important role in the transmission of Y. pestis back to a mammalian host. While in the insect vector, proteins encoded by hemin storage system genetic loci induce biofilm formation in the proventriculus, a valve connecting the midgut to the esophagus. The presence of this biofilm seems likely to be required for stable infection of the flea. Aggregation in the biofilm inhibits feeding, as a mass of clotted blood and bacteria forms (referred to as "Bacot's block" after entomologist A.W. Bacot, the first to describe this phenomenon). Transmission of Y. pestis occurs during the futile attempts of the flea to feed. Ingested blood is pumped into the esophagus, where it dislodges bacteria lodged in the proventriculus, and these are regurgitated back into the host circulatory system. In humans and other susceptible hosts Pathogenesis due to Y. pestis infection of mammalian hosts is due to several factors, including an ability of these bacteria to suppress and avoid normal immune system responses such as phagocytosis and antibody production. Flea bites allow the bacteria to pass the skin barrier. Y. pestis expresses a plasmin activator that is an important virulence factor for pneumonic plague and that might degrade blood clots to facilitate systemic invasion. Many of the bacteria's virulence factors are antiphagocytic in nature. Two important antiphagocytic antigens, named F1 (fraction 1) and V or LcrV, are both important for virulence. These antigens are produced by the bacterium at normal human body temperature. Furthermore, Y. pestis survives and produces F1 and V antigens while it is residing within white blood cells such as monocytes, but not in neutrophils. Natural or induced immunity is achieved by the production of specific opsonic antibodies against F1 and V antigens; antibodies against F1 and V induce phagocytosis by neutrophils. In addition, the type-III secretion system (T3SS) allows Y. pestis to inject proteins into macrophages and other immune cells. These T3SS-injected proteins, called Yersinia outer proteins (Yops), include Yop B/D, which form pores in the host cell membrane and have been linked to cytolysis. 
YopO, YopH, YopM, YopT, YopJ, and YopE are injected into the cytoplasm of host cells by the T3SS through the pore created in part by YopB and YopD. The injected Yops limit phagocytosis and cell signaling pathways important in the innate immune system, as discussed below. In addition, some Y. pestis strains are capable of interfering with immune signaling (e.g., by preventing the release of some cytokines). Y. pestis proliferates inside lymph nodes, where it is able to avoid destruction by cells of the immune system such as macrophages. The ability of Y. pestis to inhibit phagocytosis allows it to grow in lymph nodes and cause lymphadenopathy. YopH is a protein tyrosine phosphatase that contributes to the ability of Y. pestis to evade immune system cells. In macrophages, YopH has been shown to dephosphorylate p130Cas, Fyb (FYN binding protein), SKAP-HOM and Pyk, a tyrosine kinase homologous to FAK. YopH also binds the p85 subunit of phosphoinositide 3-kinase, the Gab1 and Gab2 adapter proteins, and the Vav guanine nucleotide exchange factor. YopE functions as a GTPase-activating protein for members of the Rho family of GTPases such as RAC1. YopT is a cysteine protease that inhibits RhoA by removing the isoprenyl group, which is important for localizing the protein to the cell membrane. YopE and YopT have been proposed to function to limit YopB/D-induced cytolysis. This might limit the function of YopB/D to creating the pores used for Yop insertion into host cells and prevent YopB/D-induced rupture of host cells and release of cell contents that would attract and stimulate immune system responses. YopJ is an acetyltransferase that binds to a conserved α-helix of MAPK kinases. YopJ acetylates MAPK kinases at serines and threonines that are normally phosphorylated during activation of the MAP kinase cascade. YopJ is activated in eukaryotic cells by interaction with target cell phytic acid (IP6). This disruption of host cell protein kinase activity causes apoptosis of macrophages, and this is proposed to be important for the establishment of infection and for evasion of the host immune response. YopO is a protein kinase also known as Yersinia protein kinase A (YpkA). YopO is a potent inducer of human macrophage apoptosis. It has also been suggested that a bacteriophage – Ypφ – may have been responsible for increasing the virulence of this organism. Depending on which form of the plague infects the individual, a different illness develops; however, the plague overall affects the host cell's ability to communicate with the immune system, hindering the body from bringing phagocytic cells to the area of infection. Y. pestis is a versatile killer. In addition to rodents and humans, it is known to have killed camels, chickens, and pigs. Domestic dogs and cats are susceptible to plague as well, but cats are more likely to develop illness when infected. In either animal, the symptoms are similar to those experienced by humans, and can be deadly. People can be exposed by coming into contact with an infected animal (dead or alive), or by inhaling infectious droplets that a sick dog or cat has coughed into the air. Immunity A formalin-inactivated vaccine was available in the United States in 1993 for adults at high risk of contracting the plague, until its removal from the market by the Food and Drug Administration. It was of limited effectiveness and could cause severe inflammation. Experiments with genetic engineering of a vaccine based on F1 and V antigens are underway and show promise. 
However, bacteria lacking antigen F1 are still virulent, and the V antigens are sufficiently variable such that vaccines composed of these antigens may not be fully protective. The United States Army Medical Research Institute of Infectious Diseases has found that an experimental F1/V antigen-based vaccine protects crab-eating macaques but fails to protect African green monkeys. A systematic review by the Cochrane Collaboration found no studies of sufficient quality to make any statement on the efficacy of the vaccine. Isolation and identification In 1894, two bacteriologists, Alexandre Yersin of Switzerland and Kitasato Shibasaburō of Japan, independently isolated in Hong Kong the bacterium responsible for the 1894 Hong Kong plague. Though both investigators reported their findings, a series of confusing and contradictory statements by Kitasato eventually led to the acceptance of Yersin as the primary discoverer of the organism. Yersin named it Pasteurella pestis in honor of the Pasteur Institute, where he worked. In 1967, it was moved to a new genus and renamed Yersinia pestis in his honor. Yersin also noted that rats were affected by plague not only during plague epidemics, but also often preceding such epidemics in humans, and that plague was regarded by many locals as a disease of rats; villagers in China and India asserted that when large numbers of rats were found dead, plague outbreaks soon followed. In 1898, French scientist Paul-Louis Simond (who had also come to China to battle the Third Pandemic) discovered the rat–flea vector that drives the disease. He had noted that persons who became ill did not have to be in close contact with each other to acquire the disease. In Yunnan, China, inhabitants would flee from their homes as soon as they saw dead rats, and on the island of Formosa (Taiwan), residents considered that handling dead rats heightened the risks of developing plague. These observations led him to suspect that the flea might be an intermediary factor in the transmission of plague, since people acquired plague only if they were in contact with rats that had died less than 24 hours before. In a now classic experiment, Simond demonstrated how a healthy rat died of the plague after infected fleas had jumped to it from a rat that had recently died of the plague. During the third pandemic, the outbreak spread to Chinatown, San Francisco, from 1900 to 1904 and then to Oakland and the East Bay from 1907 to 1909. It has been present in the rodents of western North America ever since, as fear of the consequences of the outbreak on trade caused authorities to hide the deaths of Chinatown residents long enough for the disease to be passed to widespread species of native rodents in outlying areas. Three main strains are recognised: Y. p. antiqua, which caused a plague pandemic in the sixth century; Y. p. medievalis, which caused the Black Death and subsequent epidemics during the second pandemic wave; and Y. p. orientalis, which is responsible for current plague outbreaks. 21st century On January 15, 2018, researchers at the University of Oslo and the University of Ferrara suggested that humans and their parasites (most likely fleas and lice at the time) were the biggest carriers of the plague. Ancient DNA evidence In 2010, researchers in Germany definitively established, using PCR evidence from samples obtained from Black Death victims, that Y. pestis was the cause of the medieval Black Death. In 2011, the first genome of Y. 
pestis isolated from Black Death victims was published, and concluded that this medieval strain was ancestral to most modern forms of Y. pestis. In 2015, Cell published results from a study of ancient graves. Plasmids of Y. pestis were detected in archaeological samples of the teeth of seven Bronze Age individuals, in the Afanasievo culture in Siberia, the Corded Ware culture in Estonia, the Sintashta culture in Russia, the Unetice culture in Poland, and the Andronovo culture in Siberia. In 2018, a study of the emergence and spread of the pathogen during the Neolithic decline (as far back as 6,000 years ago) was published. A site in Sweden was the source of the DNA evidence, and trade networks were proposed as the likely avenue of spread rather than migrations of populations. There is evidence suggesting that Y. pestis may have originated in Europe in the Cucuteni–Trypillia culture, not in Asia as is more commonly believed. DNA evidence published in 2015 indicates Y. pestis infected humans 5,000 years ago in Bronze Age Eurasia, but genetic changes that made it highly virulent did not occur until about 4,000 years ago. The highly virulent version capable of transmission by fleas through rodents, humans, and other mammals was found in two individuals associated with the Srubnaya culture from the Samara region in Russia from around 3,800 years ago and an Iron Age individual from Kapan, Armenia, from around 2,900 years ago. This indicates that at least two lineages of Y. pestis were circulating during the Bronze Age in Eurasia. The Y. pestis bacterium has a relatively large number of nonfunctioning genes and three "ungainly" plasmids, suggesting an origin less than 20,000 years ago. One such strain has been identified from about 4000 BP in western Britain (the "LNBA lineage", for Late Neolithic and Bronze Age), indicating that this highly transmissible form spread from Eurasia to the far north-western edges of Europe. In 2016, the Y. pestis bacterium was identified from DNA in teeth found at a Crossrail building site in London. The human remains were found to be victims of the Great Plague of London, which lasted from 1665 to 1666. In 2021, researchers found a 5,000-year-old victim of Y. pestis, the world's oldest known, in hunter-gatherer remains in the modern Latvian and Estonian border area. Between 5,300 and 4,900 years before present, the population of Neolithic farmers in northern Europe underwent a marked decline. It had not been determined whether this was the result of agricultural recession or of Y. pestis infection within the population. A 2024 study of Neolithic graves in Denmark and western Sweden concluded that plague was sufficiently widespread to be the cause of the decline, and that there were three outbreaks in Northern Europe between 5,200 and 4,900 years ago, with the final outbreak caused by a strain of Yersinia pestis with reshuffled genes. However, another recent study contests this notion. On the basis of 133 individuals from gallery graves of the Wartberg Culture, a research team from Kiel University (Collaborative Research Centre 1266) found Yersinia pestis in only two individuals. This indicates that there was no large-scale disease outbreak in this cultural context. Moreover, they found that a dog was infected. Events Between 1970 and 2020, 496 cases were reported in the United States. Cases have been found predominantly in New Mexico, Arizona, Colorado, California, Oregon, and Nevada. 
In 2008, plague was commonly found in sub-Saharan Africa and Madagascar, areas that accounted for over 95% of the reported cases. In September 2009, the death of Malcolm Casadaban, a molecular genetics professor at the University of Chicago, was linked to his work on a weakened laboratory strain of Y. pestis. Hemochromatosis was hypothesised to be a predisposing factor in Casadaban's death from this attenuated strain used for research. On November 3, 2019, two cases of pneumonic plague were diagnosed at a hospital in Beijing's Chaoyang district, prompting fears of an outbreak. The first patient was a middle-aged man with fever, who had complained of difficulty breathing for some ten days, accompanied by his wife with similar symptoms. Police quarantined the emergency room at the hospital, and controls were placed on Chinese news aggregators. On 18 November a third case was reported, in a 55-year-old man from Xilingol League, one of the twelve prefecture-level divisions of China's Inner Mongolia autonomous region. The patient received treatment, and 28 symptomless contacts were placed in quarantine. In July 2020, officials increased precautions after a case of bubonic plague was confirmed in Bayannur, a city in China's Inner Mongolia autonomous region. The patient was quarantined and treated. According to China's Global Times, a second suspected case was also investigated, and a level 3 alert was issued, in effect until the end of the year. It forbade the hunting and eating of animals that could carry plague, and called on the public to report suspected cases.
Biology and health sciences
Gram-negative bacteria
Plants
34368
https://en.wikipedia.org/wiki/Yellow
Yellow
Yellow is the color between green and orange on the spectrum of light. It is evoked by light with a dominant wavelength of roughly 575–585 nm. It is a primary color in subtractive color systems, used in painting or color printing. In the RGB color model, used to create colors on television and computer screens, yellow is a secondary color made by combining red and green at equal intensity. Carotenoids give the characteristic yellow color to autumn leaves, corn, canaries, daffodils, and lemons, as well as egg yolks, buttercups, and bananas. They absorb light energy and, in some cases, protect plants from photodamage. Sunlight has a slight yellowish hue when the Sun is near the horizon, due to atmospheric scattering of shorter wavelengths (green, blue, and violet). Because it was widely available, yellow ochre pigment was one of the first colors used in art; the Lascaux cave in France has a painting of a yellow horse 17,000 years old. Ochre and orpiment pigments were used to represent gold and skin color in Egyptian tombs, then in the murals in Roman villas. In the early Christian church, yellow was the color associated with the Pope and the golden keys of the Kingdom, but it was also associated with Judas Iscariot and used to mark heretics. In the 20th century, Jews in Nazi-occupied Europe were forced to wear a yellow star. In China, bright yellow was the color of the Middle Kingdom, and could be worn only by the emperor and his household; special guests were welcomed on a yellow carpet. According to surveys in Europe, Canada, the United States and elsewhere, yellow is the color people most often associate with amusement, gentleness, humor, happiness, and spontaneity; however it can also be associated with duplicity, envy, jealousy, greed, justice, and, in the U.S., cowardice. In Iran it has connotations of pallor and sickness, but also wisdom and connection. In China and many Asian countries, it is seen as the color of royalty, nobility, respect, happiness, glory, harmony and wisdom. Etymology The word yellow is from the Old English geolu, geolwe (oblique case), meaning "yellow, yellowish", derived from the Proto-Germanic word *gelwaz "yellow". It has the same Indo-European base, *gʰel-, as the words gold and yell; *gʰel- means both bright and gleaming, and to cry out. The English term is related to other Germanic words for yellow, namely Scots yella, East Frisian jeel, West Frisian giel, Dutch geel, German gelb, and Swedish and Norwegian gul. According to the Oxford English Dictionary, the oldest known use of this word in English is from The Epinal Glossary in 700. Science and nature Optics, color printing, and computer screens Yellow is found between green and red on the spectrum of visible light. It is the color the human eye sees when it looks at light with a dominant wavelength between 570 and 590 nanometers. In color printing, yellow is one of the three subtractive primary colors of ink along with magenta and cyan. Together with black, they can be overlaid in the right combination to print any full-color image. (See the CMYK color model.) A particular yellow is used, called Process yellow (also known as "pigment yellow", "printer's yellow", and "canary yellow"). Process yellow is not an RGB color, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there can be variations in the printed color that is pure yellow ink. 
The yellow on a color television or computer screen is created in a completely different way: by combining green and red light at the right level of intensity. (See RGB color model.) Complementary colors Traditionally, the complementary color of yellow is purple; the two colors are opposite each other on the color wheel long used by painters. Vincent van Gogh, an avid student of color theory, used combinations of yellow and purple in several of his paintings for the maximum contrast and harmony. Hunt defines complementary colors as follows: "two colors are complementary when it is possible to reproduce the tristimulus values of a specified achromatic stimulus by an additive mixture of these two stimuli." That is, when two colored lights can be mixed to match a specified white (achromatic, non-colored) light, the colors of those two lights are complementary. This definition, however, does not constrain what version of white will be specified. In the nineteenth century, the scientists Grassmann and Helmholtz did experiments in which they concluded that finding a good complement for spectral yellow was difficult, but that the result was indigo, that is, a wavelength that today's color scientists would call violet or purple. Helmholtz says "Yellow and indigo blue" are complements. Grassmann reconstructs Newton's category boundaries in terms of wavelengths and says "This indigo therefore falls within the limits of color between which, according to Helmholtz, the complementary colors of yellow lie." Newton's own color circle has yellow directly opposite the boundary between indigo and violet. These results, that the complement of yellow is a wavelength shorter than 450 nm, are derivable from the modern CIE 1931 system of colorimetry if it is assumed that the yellow is about 580 nm or shorter wavelength, and the specified white is the color of a blackbody radiator of temperature 2800 K or lower (that is, the white of an ordinary incandescent light bulb). More typically, with a daylight-colored white of around 5000 to 6000 K, the complement of yellow will be in the blue wavelength range, which is the standard modern answer for the complement of yellow. Because of the characteristics of paint pigments and use of different color wheels, painters traditionally regard the complement of yellow as the color indigo or blue-violet. Lasers Lasers emitting in the yellow part of the spectrum are less common and more expensive than most other colors. In commercial products, diode-pumped solid-state (DPSS) technology is employed to create the yellow light. An infrared laser diode at 808 nm is used to pump a crystal of neodymium-doped yttrium vanadium oxide (Nd:YVO4) or neodymium-doped yttrium aluminum garnet (Nd:YAG) and induces it to emit at two frequencies (281.76 THz and 223.39 THz: 1064 nm and 1342 nm wavelengths) simultaneously. This deeper infrared light is then passed through another crystal containing potassium, titanium and phosphorus (KTP), whose non-linear properties generate light at a frequency that is the sum of the two incident beams (505.15 THz); in this case corresponding to the wavelength of 593.5 nm ("yellow"). This wavelength is also available, though even more rarely, from a helium–neon laser. However, this is not a true yellow, as it exceeds 590 nm. A variant of this same DPSS technology using slightly different starting frequencies was made available in 2010, producing a wavelength of 589 nm, which is considered a true yellow color. 
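The sum-frequency arithmetic behind the 593.5 nm figure is easy to reproduce; here is a brief Python sketch (illustrative only):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def freq_thz(wavelength_nm: float) -> float:
    """Frequency in THz for a vacuum wavelength given in nm."""
    return C / (wavelength_nm * 1e-9) / 1e12

f1 = freq_thz(1064)              # ~281.76 THz (one Nd emission line)
f2 = freq_thz(1342)              # ~223.39 THz (the other Nd line)
f_sum = f1 + f2                  # sum frequency generated in the KTP crystal
print(f_sum)                     # ~505.15 THz
print(C / (f_sum * 1e12) * 1e9)  # ~593.5 nm, the yellow output
```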
The use of yellow lasers at 589 nm and 594 nm has recently become more widespread thanks to the field of optogenetics. Astronomy Stars of spectral classes F and G have color temperatures that make them look "yellowish". The first astronomer to classify stars according to their color was F. G. W. Struve in 1827. One of his classifications was yellow, and this roughly corresponded to stars in the modern spectral range F5 to K0. The Strömgren photometric system for stellar classification includes a 'y' or yellow filter that is centered at a wavelength of 550 nm and has a bandwidth of 20–30 nm. Yellow supergiants are rare because F- and G-class supergiants are physically unstable; the yellow stage is most often a brief transitional phase between blue supergiants and red supergiants. Some yellow supergiants, the Cepheid variables, pulsate with a period that is tightly correlated with their luminosity (absolute magnitude); hence, if their period and apparent magnitude are known, the distance to them can be calculated with great precision (a worked example appears below, after the Biology discussion). Cepheid variables have hence been used to determine distances within and beyond the Milky Way galaxy. The most famous example is the current North Pole star, Polaris. Biology Autumn leaves, yellow flowers, bananas, oranges and other yellow fruits all contain carotenoids, yellow and red organic pigments that are found in the chloroplasts and chromoplasts of plants and some other photosynthetic organisms like algae, some bacteria and some fungi. They serve two key roles in plants and algae: they absorb light energy for use in photosynthesis, and they protect the green chlorophyll from photodamage. In late summer, as daylight hours shorten and temperatures cool, the veins that carry fluids into and out of the leaf are gradually closed off. The water and mineral intake into the leaf is reduced, slowly at first, and then more rapidly. It is during this time that the chlorophyll begins to decrease. As the chlorophyll diminishes, the yellow and red carotenoids become more and more visible, creating the classic autumn leaf color. Carotenoids are common in many living things; they give the characteristic color to carrots, maize, daffodils, rutabagas, buttercups and bananas. They are responsible for the red of cooked lobsters, the pink of flamingoes and salmon and the yellow of canaries and egg yolks. Xanthophylls, the most common yellow pigments, form one of the two major divisions of the carotenoid group. The name is from Greek xanthos ("yellow") and phyllon ("leaf"). Xanthophylls are most commonly found in the leaves of green plants, but they also find their way into animals through the food they eat. For example, the yellow color of chicken egg yolks, fat, and skin comes from the feed the chickens consume. Chicken farmers understand this, and often add xanthophylls, usually lutein, to make the egg yolks more yellow. Bananas are green when they are picked because of the chlorophyll their skin contains. Once picked, they begin to ripen; hormones in the bananas convert amino acids into ethylene gas, which stimulates the production of several enzymes. These enzymes start to change the color, texture and flavor of the banana. The green chlorophyll supply is stopped and the yellow color of the carotenoids replaces it; eventually, as the enzymes continue their work, the cell walls break down and the bananas turn brown.
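Here is the worked example of the Cepheid distance method promised above, as a short Python sketch. The period-luminosity calibration used (absolute magnitude M = -2.43 (log10 P - 1) - 4.05, with P in days) is one published V-band fit, quoted purely for illustration; the distance then follows from the standard distance modulus m - M = 5 log10 d - 5, with d in parsecs. Extinction by interstellar dust is ignored, and the numbers plugged in for Polaris are rough values:

import math

def cepheid_distance_pc(period_days, apparent_mag):
    # Illustrative period-luminosity (Leavitt law) calibration.
    absolute_mag = -2.43 * (math.log10(period_days) - 1) - 4.05
    # Distance modulus: m - M = 5 log10(d) - 5, with d in parsecs.
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Rough figures for Polaris: period ~4 days, mean apparent magnitude ~2.
print(f"{cepheid_distance_pc(3.97, 1.98):.0f} pc")  # on the order of 100 pc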
Fish Yellowtail is the common name for dozens of different fish species that have yellow tails or a yellow body. Most of the time, yellowtail actually refers to the Japanese amberjack, a fish that lives between Japan and Hawaii. Yellowfin tuna (Thunnus albacares) is a species of tuna, having bright yellow anal and second dorsal fins. Found in tropical and subtropical seas, it is caught as a replacement for depleted stocks of bluefin tuna. Smallmouth yellowfish (Labeobarbus aeneus) is a species of ray-finned fish in the genus Labeobarbus. It has become an invasive species in rivers of the Eastern Cape, South Africa, such as the Mbhashe River. Insects The yellow-fever mosquito (Aedes aegypti) is a mosquito so named because it transmits the mosquito-borne viruses that cause dengue fever and yellow fever. Yellowjackets are black-and-yellow wasps of the genus Vespula or Dolichovespula (though some can be black-and-white, the most notable of these being the bald-faced hornet, Dolichovespula maculata). They can be identified by their distinctive black-and-yellow color, small size (slightly larger than a bee), and entirely black antennae. Trees Populus tremuloides is a deciduous tree native to cooler areas of North America, one of several species referred to by the common name aspen. Populus tremuloides is the most widely distributed tree in North America, being found from Canada to central Mexico. The yellow birch (Betula alleghaniensis) is a birch species native to eastern North America, from Nova Scotia, New Brunswick, and southern Quebec west to Minnesota, and south in the Appalachian Mountains to northern Georgia. It is a medium-sized deciduous tree. The bark is smooth and yellow-bronze, and the wood is extensively used for flooring, cabinetry, and toothpicks. The thorny yellowwood is an Australian rainforest tree which has deep yellow wood. Yellow poplar is a common name for Liriodendron, the tuliptree. The common name is inaccurate as this genus is not related to poplars. Handroanthus albus is a tree with yellow flowers native to the Cerrado of Brazil. History, art, and fashion Prehistory Yellow, in the form of yellow ochre pigment made from clay, was one of the first colors used in prehistoric cave art. The cave of Lascaux has an image of a horse colored with yellow estimated to be 17,300 years old. Ancient history In Ancient Egypt, yellow was associated with gold, which was considered to be imperishable, eternal and indestructible. The skin and bones of the gods were believed to be made of gold. The Egyptians used yellow extensively in tomb paintings; they usually used either yellow ochre or the brilliant orpiment, though the latter contained arsenic and was highly toxic. A small paintbox with orpiment pigment was found in the tomb of King Tutankhamun. Men were always shown with brown faces, women with yellow ochre or gold faces. The ancient Romans used yellow in their paintings to represent gold and also in skin tones. It is found frequently in the murals of Pompeii. Post-classical history During the Post-Classical period, yellow became firmly established as the color of Judas Iscariot, the disciple who betrayed Jesus Christ, even though the Bible never describes his clothing. From this connection, yellow also took on associations with envy, jealousy and duplicity. The tradition started in the Renaissance of marking non-Christian outsiders, such as Jews, with the color yellow.
In 16th-century Spain, those accused of heresy and who refused to renounce their views were compelled to come before the Spanish Inquisition dressed in a yellow cape. The color yellow has been historically associated with moneylenders and finance. The National Pawnbrokers Association's logo depicts three golden spheres hanging from a bar, referencing the three bags of gold that the patron saint of pawnbroking, St. Nicholas, holds in his hands. Additionally, the symbol of three golden orbs is found in the coat of arms of the House of Medici, a famous fifteenth-century Italian dynasty of bankers and lenders. Modern history 18th and 19th centuries The 18th and 19th centuries saw the discovery and manufacture of synthetic pigments and dyes, which quickly replaced the traditional yellows made from arsenic, cow urine, and other substances. Jean-Honoré Fragonard painted A Young Girl Reading, in which the subject is dressed in a bright saffron yellow dress. This painting is "considered by many critics to be among Fragonard's most appealing and masterly". The 19th-century British painter J. M. W. Turner was one of the first in that century to use yellow to create moods and emotions, the way romantic composers were using music. His painting Rain, Steam and Speed – The Great Western Railway was dominated by glowing yellow clouds. Georges Seurat used the new synthetic colors in his experimental paintings composed of tiny points of primary colors, particularly in his famous A Sunday Afternoon on the Island of La Grande Jatte (1884–86). He did not know that the new synthetic yellow pigment, zinc yellow or zinc chromate, which he used in the light green lawns, was highly unstable and would quickly turn brown. The painter Vincent van Gogh was a particular admirer of the color yellow, the color of sunshine. Writing to his sister from the south of France in 1888, he wrote, "Now we are having beautiful warm, windless weather that is very beneficial to me. The sun, a light that for lack of a better word I can only call yellow, bright sulfur yellow, pale lemon gold. How beautiful yellow is!" In Arles, Van Gogh painted sunflowers inside a small house he rented at 2 Place Lamartine, a house painted with a color that Van Gogh described as "buttery yellow". Van Gogh was one of the first artists to use commercially manufactured paints, rather than paints he made himself. He used the traditional yellow ochre, but also chrome yellow, first made in 1809, and cadmium yellow, first made in 1820. In 1895 a new popular art form began to appear in New York newspapers: the color comic strip. It took advantage of a new color printing process, which used color separation and three different colors of ink (magenta, cyan, and yellow), plus black, to create all the colors on the page. One of the first characters in the new comic strips was a humorous boy of the New York streets named Mickey Dugan, more commonly known as the Yellow Kid, from the yellow nightshirt he wore. He gave his name (and color) to the whole genre of popular, sensational journalism, which became known as "yellow journalism". 20th and 21st centuries In the 20th century, yellow was revived as a symbol of exclusion, as it had been in the Middle Ages and Renaissance. Jews in Nazi Germany and German-occupied countries were required to sew yellow triangles with the Star of David onto their clothing. In the 20th century, modernist painters reduced painting to its simplest colors and geometric shapes.
The Dutch modernist painter Piet Mondrian made a series of paintings which consisted of a pure white canvas with a grid of vertical and horizontal black lines and rectangles of yellow, red, and blue. Yellow was particularly valued in the 20th century because of its high visibility. Because it can be seen well from great distances and at high speeds, yellow is an ideal color to be viewed from moving automobiles. It often replaced red as the color of fire trucks and other emergency vehicles, and was popular in neon signs, especially in Las Vegas and in China, where yellow was the most esteemed color. In the 1960s, Pickett Brand developed the "Eye Saver Yellow" slide rule, which was produced with a specific yellow color (Angstrom 5600) that reflects long-wavelength rays and promotes optimum eye-ease to help prevent eyestrain and improve visual accuracy. The 21st century saw the use of unusual materials and technologies to create new ways of experiencing the color yellow. One example was The weather project, by Danish-Icelandic artist Olafur Eliasson, which was installed in the open space of the Turbine Hall of London's Tate Modern in 2003. Eliasson used humidifiers to create a fine mist in the air via a mixture of sugar and water, as well as a semi-circular disc made up of hundreds of monochromatic lamps which radiated yellow light. The ceiling of the hall was covered with a huge mirror, in which visitors could see themselves as tiny black shadows against a mass of light. Fruits, vegetables, and eggs Many fruits are yellow when ripe, such as lemons and bananas, their color derived from carotenoid pigments. Egg yolks gain their color from xanthophylls, also a type of carotenoid pigment. Flowers Yellow is a common color of flowers. Other plants Rapeseed (Brassica napus), also known as rape or oilseed rape, is a bright yellow flowering member of the family Brassicaceae (mustard or cabbage family). Goldenrod is a yellow flowering plant in the family Asteraceae. Minerals and chemistry Yellowcake (also known as urania and uranic oxide) is concentrated uranium oxide, obtained through the milling of uranium ore. Yellowcake is used in the preparation of fuel for nuclear reactors and in uranium enrichment, one of the essential steps for creating nuclear weapons. Titan yellow (also known as Clayton yellow) has been used to determine magnesium in serum and urine, but the method is prone to interference, making the ammonium phosphate method superior when analysing blood cells, food or fecal material. Methyl yellow (p-dimethylaminoazobenzene) is a pH indicator used to determine acidity. It changes from yellow at pH 4.0 to red at pH 2.9. Yellow fireworks are produced by adding sodium compounds to the firework mixture. Sodium has a strong emission at 589.3 nm (the D-line), a very slightly orange-tinted yellow. Amongst the elements, sulfur and gold are most obviously yellow. Phosphorus, arsenic and antimony have allotropes which are yellow or whitish-yellow; fluorine and chlorine are pale yellowish gases. Many crystalline chemical compounds, such as 2,4-dinitrophenol, are yellowish in color. Pigments Yellow ochre (also known as Mars yellow, Pigment yellow 42, 43), a hydrated ferric oxide, is a naturally occurring pigment found in clays in many parts of the world. It is non-toxic and has been used in painting since prehistoric times. Indian yellow is a transparent, fluorescent pigment used in oil paintings and watercolors.
Originally magnesium euxanthate, it was claimed to have been produced from the urine of Indian cows fed only on mango leaves. It has now been replaced by synthetic Indian yellow hue. Naples yellow (lead antimonate yellow) is one of the oldest synthetic pigments, derived from the mineral bindheimite and used extensively up to the 20th century. It is toxic and is nowadays replaced in paint by a mixture of modern pigments. Cadmium yellow (cadmium sulfide, CdS) has been used in artists' paints since the mid-19th century. Because of its toxicity, it may nowadays be replaced by azo pigments. Chrome yellow (lead chromate), derived from the mineral crocoite, was used by artists in the earlier part of the 19th century, but has been largely replaced by other yellow pigments because of the toxicity of lead. Zinc yellow or zinc chromate is a synthetic pigment made in the 19th century, and used by the painter Georges Seurat in his pointillist paintings. He did not know that it was highly unstable, and would quickly turn brown. Titanium yellow (nickel antimony titanium yellow rutile) is created by adding small amounts of the oxides of nickel and antimony to titanium dioxide and heating. It is used to produce yellow paints with good white coverage and has the LBNL paint code "Y10". Gamboge is an orange-brown resin, derived from trees of the genus Garcinia, which becomes yellow when powdered. It was used as a watercolor pigment in the Far East from the 8th century (the name "gamboge" is derived from "Cambodia") and has been used in Europe since the 17th century. Orpiment, also called King's Yellow or Chinese Yellow, is arsenic trisulfide, and was used as a paint pigment until the 19th century when, because of its high toxicity and reaction with lead-based pigments, it was generally replaced by cadmium yellow. Azo dye-based pigment (a brightly colored transparent or semitransparent dye with a white pigment) is used as the colorant in most modern paints requiring either a highly saturated yellow or simplicity of color mixing. The most common is the monoazo arylide yellow family, first marketed as Hansa Yellow. Dyes Curcuma longa, also known as turmeric, is a plant grown in India and Southeast Asia which serves as a dye for clothing, especially monks' robes; as a spice for curry and other dishes; and as a popular medicine. It is also used as a food coloring for mustard and other products. Saffron, like turmeric, is one of the rare dyes that is also a spice and food colorant. It is made from the dried red stigmas of the Crocus sativus flower. It must be picked by hand, and it takes 150 flowers to obtain a single gram of stigma, so it is extremely expensive. It probably originated in the Mediterranean or Southwest Asia, and its use was detailed in a 7th-century BC Assyrian botanical reference compiled under Ashurbanipal. It was known in India at the time of the Buddha, and after his death his followers decreed that monks should wear robes the color of saffron. Saffron was used to dye the robes of the senior Buddhist monks, while ordinary monks wore robes dyed with gamboge or with turmeric (Curcuma longa). The color of saffron comes from crocin, a red carotenoid pigment. The color of the dyed fabric varies from deep red to orange to yellow, depending upon the type of saffron and the process.
Most saffron today comes from Iran, but it is also grown commercially in Spain, Italy and Kashmir in India, and as a boutique crop in New Zealand, the United Kingdom, France, Switzerland and other countries. In the United States, it has been cultivated by the Pennsylvania Dutch community since the early 18th century. Because of the high price of saffron, other similar dyes and spices are often sold under the name saffron; for instance, what is called Indian saffron is often really turmeric. Reseda luteola, also known as dyer's weed, yellow weed or weld, has been used as a yellow dye from Neolithic times. It grew wild along the roads and walls of Europe, and was introduced into North America, where it grows as a weed. It was used both as a yellow dye, whose color was deep and lasting, and to dye fabric green, first by dyeing it blue with indigo, then dyeing it with Reseda luteola to turn it a rich, solid and lasting green. It was the most common yellow dye in Europe from the Middle Ages until the 18th century, when it was replaced first by the bark of the quercitron tree from North America, then by synthetic dyes. It was also widely used in North Africa and in the Ottoman Empire. Gamboge is a deep saffron to mustard yellow pigment and dye. In Asia, it is frequently used to dye Buddhist monks' robes. Gamboge is most often extracted by tapping resin from various species of evergreen trees of the family Guttiferae, which grow in Cambodia, Thailand, and elsewhere in Southeast Asia. "Kambuj" (Sanskrit: कंबुज) is the ancient Sanskrit name for Cambodia. Food coloring The most common yellow food coloring in use today is tartrazine, a synthetic lemon yellow azo dye. It is also known as E number E102, C.I. 19140, FD&C yellow 5, acid yellow 23, food yellow 4, and trisodium 1-(4-sulfonatophenyl)-4-(4-sulfonatophenylazo)-5-pyrazolone-3-carboxylate. It is the yellow most frequently used in processed food products such as corn and potato chips, breakfast cereals such as corn flakes, candies, popcorn, mustard, jams and jellies, gelatin, soft drinks (notably Mountain Dew), energy and sports drinks, and pastries. It is also widely used in liquid and bar soap, shampoo, cosmetics and medicines. Sometimes it is mixed with blue dyes to color processed products green. It is typically labelled on food packages as "color", "tartrazine", or "E102". In the United States, because of concerns about possible health problems related to intolerance to tartrazine, its presence must be declared on food and drug product labels. Another popular synthetic yellow coloring is Sunset Yellow FCF (also known as orange yellow S, FD&C yellow 6 and C.I. 15985). It is manufactured from aromatic hydrocarbons from petroleum. When added to foods sold in Europe, it is denoted by E number E110. Symbolism and associations In the West, yellow is not a well-loved color. In a 2000 survey, only 6% of respondents in Europe and America named it as their favorite color, compared with 45% for blue, 15% for green, 12% for red, and 10% for black. For 7% of respondents, it was their least favorite color. Yellow is considered a color of ambivalence and contradiction. It is associated with optimism and amusement, but also with betrayal, duplicity, and jealousy. However, in China and other parts of Asia, yellow is a color of virtue and nobility. In China Yellow has strong historical and cultural associations in China, where huáng is the color of happiness, glory, and wisdom.
Although huáng originally and occasionally still covers a range inclusive of tans and oranges, speakers of modern Standard Mandarin tend to map their use of huáng to shades corresponding to English yellow. In Chinese symbolism, yellow, red, and green are masculine colors, while black and white are considered feminine. After the development of the theory of five elements, the Chinese reckoned various correspondences. On a five-season model, late summer was characterized by yellowing leaves, and yellow was taken as the color of the fifth direction of the compass, the center. China is called the Middle Kingdom; the palace of the Emperor was considered to be in the exact center of the world. The legendary first emperor of China was called the Yellow Emperor, and the last, Puyi (1906–67), described in his memoirs how every object which surrounded him as a child was yellow. "It made me understand from my most tender age that I was of a unique essence, and it instilled in me the consciousness of my 'celestial nature' which made me different from every other human." The Chinese Emperor was literally considered the child of heaven, with both a political and religious role, both symbolized by yellow. After the Song dynasty, laws banned the use of certain bright yellow shades by anyone but the emperor. Distinguished visitors were honored with a yellow, not a red, carpet. In current Chinese pop culture, the term "yellow movie" refers to films and other cultural items of a pornographic nature, analogous to the English "blue movie". This was the basis of the 2007 "very erotic very violent" meme, in which the word "erotic" translates the Chinese "yellow". Light and reason Yellow, as the color of sunlight when the sun is near the horizon, is commonly associated with warmth. Yellow combined with red symbolized heat and energy. A room painted yellow feels warmer than a room painted white, and a lamp with yellow light seems more natural than a lamp with white light. As the color of light, yellow is also associated with knowledge and wisdom. In English and many other languages, "brilliant" and "bright" mean intelligent. In Islam, the yellow color of gold symbolizes wisdom. In medieval European symbolism, red symbolized passion, blue symbolized the spiritual, and yellow symbolized reason. In many European universities, yellow gowns and caps are worn by members of the faculty of physical and natural sciences, as yellow is the color of reason and research. Gold and blond In ancient Greece and Rome, the gods were often depicted with yellow or blonde hair, which was described in literature as 'golden'. The color yellow was associated with the sun gods Helios and Apollo. It was fashionable in ancient Greece for men and women to dye their hair yellow, or to spend time in the sun to bleach it. In ancient Rome, prostitutes were required to bleach their hair, to be easily identified, but it also became a fashionable hair color for aristocratic women, influenced by the exotic blonde hair of many of the newly conquered slaves from Gaul, Britain, and Germany. However, in medieval Europe and later, the word yellow often had negative connotations, associated with betrayal, so yellow hair was more poetically called 'blond', 'light', 'fair', or most often 'golden'. Visibility and caution Yellow is the most visible color from a distance, so it is often used for objects that need to be seen, such as fire engines, road maintenance equipment, school buses and taxicabs.
It is also often used for warning signs, since yellow traditionally signals caution, rather than danger. Safety yellow is often used for safety and accident prevention information. A yellow light on a traffic signal means slow down, but not stop. The Occupational Safety and Health Administration (OSHA) uses Pantone 116 (a yellow hue) as its standard color implying "general warning", while the Federal Highway Administration similarly uses yellow to communicate warning or caution on highway signage. A yellow penalty card in a soccer match means warning, but not expulsion. Optimism and pleasure Yellow is the color most associated with optimism and pleasure; it is a color designed to attract attention, and is used for amusement. Yellow dresses in fashion are rare, but always associated with gaiety and celebration. Mayan and Italian The ancient Maya associated the color yellow with the direction south. The Maya glyph for "yellow" (k'an) also means "precious" or "ripe". "Giallo", in Italian, refers to crime stories, both fictional and real. This association began in about 1930, when the first series of crime novels published in Italy had yellow covers. Music The Beatles' 1966 album Revolver features the No. 1 hit "Yellow Submarine"; subsequently, United Artists released an animated film in 1968 called Yellow Submarine, based on the music of the Beatles. Donovan's song "Mellow Yellow" reached number 2 on the U.S. Billboard charts in 1966 and number 8 in the UK in early 1967; an album of the same name followed in March 1967. The song popularized a widely held belief that it was possible to get high by smoking scrapings from the inside of banana peels, a rumor actually started in 1966 by Country Joe McDonald. Coldplay achieved worldwide fame with their 2000 single "Yellow". "Yellow River" is a song recorded by the British band Christie in 1970. The Yellow River Piano Concerto is a piano concerto arranged by a collaboration between musicians including Yin Chengzong and Chu Wanghua. Its premiere was in 1969 during the Cultural Revolution. Politics Yellow as a political color is most commonly associated with liberalism, libertarianism and anarcho-capitalism. Contemporary political parties using yellow include the Liberal Democrats and UKIP in the United Kingdom, the SNP in Scotland, ACT in New Zealand, and the Libertarian Party in the United States. In the United States, a yellow dog Democrat was a Southern voter who consistently voted for Democratic candidates in the late 19th and early 20th centuries because of lingering resentment against the Republicans dating back to the Civil War and Reconstruction period. Today the term refers to a hard-core Democrat, supposedly referring to a person who would vote for a "yellow dog" before voting for a Republican. In China, the Yellow Turbans were a Daoist sect that staged an extensive rebellion during the Han dynasty. The 1986 People Power Revolution in the Philippines was also known as the Yellow Revolution due to the presence of yellow ribbons during the demonstrations. Liberal and pro-democracy political parties and organizations such as UNIDO, PDP-Laban, and the Liberal Party have used the color yellow. More recently, it has become a pejorative term used by some pro-Ferdinand Marcos and pro-Rodrigo Duterte groups against the opposition. In France in November and December 2018, an opposition movement called the Yellow Vests went into the streets to protest against the fiscal policies of President Emmanuel Macron.
They wore yellow safety vests, which French motorists are required by law to have in their cars. Selected national and international flags Three of the five most populous countries in the world, including China and Brazil, have yellow or gold in their flag, together representing about half of the world's population. While many flags use yellow, their symbolism varies widely, from civic virtue to golden treasure, golden fields, the desert, royalty, the keys to Heaven and the leadership of the Communist Party. In classic European heraldry, yellow, along with white, is one of the two metals (called gold and silver), and therefore flags following heraldic design rules must use either yellow or white to separate any of their other colors (see the rule of tincture). Religion In Buddhism, the saffron colors of robes to be worn by monks were defined by the Buddha himself and his followers in the 5th century BCE. The robe and its color are a sign of renunciation of the outside world and commitment to the order. The candidate monk, with his master, first appears before the monks of the monastery in his own clothes, with his new robe under his arm, and asks to enter the order. He then takes his vows, puts on the robes, and with his begging bowl, goes out to the world. Thereafter, he spends his mornings begging and his afternoons in contemplation and study, either in a forest, garden, or in the monastery. According to Buddhist scriptures and commentaries, the robe dye is allowed to be obtained from six kinds of substances: roots and tubers, plants, bark, leaves, flowers and fruits. The robes should also be boiled in water a long time to get the correctly sober color. Saffron and ochre, usually made with dye from the Curcuma longa plant or the heartwood of the jackfruit tree, are the most common colors. The so-called forest monks usually wear ochre robes and city monks saffron, though this is not an official rule. The color of robes also varies somewhat among the different "vehicles", or schools of Buddhism, and by country, depending on their doctrines and the dyes available. The monks of the strict Vajrayana, or Tantric Buddhism, practiced in Tibet, wear the most colorful robes of saffron and red. The monks of Mahayana Buddhism, practiced mainly in Japan, China and Korea, wear lighter yellow or saffron, often with white or black. Monks of Hinayana Buddhism, practiced in Southeast Asia, usually wear ochre or saffron color. Monks of the forest tradition in Thailand and other parts of Southeast Asia wear robes of a brownish ochre, dyed from the wood of the jackfruit tree. In Hinduism, the divinity Krishna is commonly portrayed dressed in yellow. Yellow and saffron are also the colors worn by sadhus, or wandering holy men, in India. The Hindu god Ganesha (also called Ganpati) is usually depicted dressed in a yellow dhotar, popularly known as pivla pitambar, which is considered especially auspicious. In Sikhism, the Sikh Rehat Maryada states that the Nishan Sahib hoisted outside every gurdwara should be xanthic (basanti in Punjabi) or greyish blue (surmaaee in Punjabi) in color. In Islam, the yellow color of gold symbolizes wisdom. In the religions of the islands of Polynesia, yellow is a sacred color, the color of the divine essence; the word "yellow" in the local languages is the same as the name of the Curcuma longa plant, which is considered the food of the gods.
In the Roman Catholic Church, yellow symbolizes gold, and in Christian tradition the golden key to the Kingdom of Heaven, which Christ gave to Saint Peter. The flag of the Vatican City and the colors of the pope are yellow and white, symbolizing the gold key and the silver key. White and yellow together can also symbolize Easter, rebirth and Resurrection. Yellow also has a negative meaning, symbolizing betrayal; Judas Iscariot is usually portrayed wearing a pale yellow toga. Yellow and golden halos mark the saints in religious paintings. In Wicca, yellow represents intellect, inspiration, imagination, and knowledge. It is used for communication, confidence, divination, and study. New Age Spiritual Metaphysics In the metaphysics of New Age author Alice A. Bailey, whose system called the Seven Rays classifies humans into seven different metaphysical psychological types, the fourth ray, of "harmony through conflict", is represented by the color yellow. People who have this metaphysical psychological type are said to be on the Yellow Ray. Yellow is used to symbolically represent the third, solar plexus chakra (Manipura). Psychics who claim to be able to observe the aura with their third eye report that someone with a yellow aura is typically someone who is in an occupation requiring intellectual acumen, such as a scientist. Sports In association football (soccer), the referee shows a yellow card to indicate that a player has been officially warned because they have committed a foul or have wasted time. Originally in rugby league, and later also in rugby union, the referee shows a yellow card to indicate that a player has been sent to the sin bin. In cycle racing, the yellow jersey, or maillot jaune, is awarded to the leader in some stage races. The tradition was begun in the Tour de France, where the sponsoring L'Auto newspaper (later L'Équipe) was printed on distinctive yellow newsprint. The national teams of Brazil, Sweden and Ukraine usually play in yellow shirts. Transportation In some countries, taxicabs are commonly yellow. This practice began in Chicago, where taxi entrepreneur John D. Hertz painted his taxis yellow based on a University of Chicago study alleging that yellow is the color most easily seen at a distance. In Canada and the United States, school buses are almost uniformly painted a yellow color (often referred to as "school bus yellow") for purposes of visibility and safety, and British bus operators such as FirstGroup are attempting to introduce the concept in the United Kingdom. "Caterpillar yellow" and "high-visibility yellow" are used for highway construction equipment. In the rules of the road, yellow (called "amber" in Britain) is a traffic light signal meaning "slow down", "caution", or "slow speed ahead". It is intermediate between green (go) and red (stop). In railway signaling, yellow is often the color for a warning to slow down, as with distant signals. Selective yellow is used in some automotive headlamps and fog lights to reduce the dazzling effects of rain, snow, and fog. Maritime signaling In international maritime signal flags, a yellow flag denotes the letter "Q". It also means that a ship asserts it does not need to be quarantined. Idioms and expressions Yellow-belly is an American expression which means a coward. The term comes from the 19th century; its exact origin is unknown, but it may refer to the color of sickness, implying that a person lacks strength and stamina.
Yellow journalism is an American term for sensationalist news reporting that presents little well-researched content. Yellow pages refers in various countries to directories of telephone numbers, arranged alphabetically by the type of business or service offered. Asian people, especially East Asians, have been racialized as yellow. The Yellow Peril was a term used in politics and popular fiction in the late 19th and early 20th century to describe the alleged economic and cultural danger posed to Europe and America by Chinese immigration. The term was first used by Kaiser Wilhelm II in Germany in 1895, and was the subject of numerous books and later films. Yellow fever is a slang phrase for an Asian race fetish. High yellow was a term sometimes used in the early 20th century to describe light-skinned African-Americans. Yellow snow is snow that has been discolored yellow by urine.
Yeast
Yeasts are eukaryotic, single-celled microorganisms classified as members of the fungus kingdom. The first yeast originated hundreds of millions of years ago, and at least 1,500 species are currently recognized. They are estimated to constitute 1% of all described fungal species. Some yeast species have the ability to develop multicellular characteristics by forming strings of connected budding cells known as pseudohyphae or false hyphae, or can rapidly evolve into multicellular clusters with specialised cell functions. Yeast sizes vary greatly, depending on species and environment, typically measuring 3–4 μm in diameter, although some yeasts can grow to 40 μm in size. Most yeasts reproduce asexually by mitosis, and many do so by the asymmetric division process known as budding. With their single-celled growth habit, yeasts can be contrasted with molds, which grow hyphae. Fungal species that can take both forms (depending on temperature or other conditions) are called dimorphic fungi. The yeast species Saccharomyces cerevisiae converts carbohydrates to carbon dioxide and alcohols through the process of fermentation. The products of this reaction have been used in baking and the production of alcoholic beverages for thousands of years. S. cerevisiae is also an important model organism in modern cell biology research, and is one of the most thoroughly studied eukaryotic microorganisms. Researchers have cultured it in order to understand the biology of the eukaryotic cell and, ultimately, human biology in great detail. Other species of yeasts, such as Candida albicans, are opportunistic pathogens and can cause infections in humans. Yeasts have recently been used to generate electricity in microbial fuel cells and to produce ethanol for the biofuel industry. Yeasts do not form a single taxonomic or phylogenetic grouping. The term "yeast" is often taken as a synonym for Saccharomyces cerevisiae, but the phylogenetic diversity of yeasts is shown by their placement in two separate phyla: the Ascomycota and the Basidiomycota. The budding yeasts or "true yeasts" are classified in the order Saccharomycetales, within the phylum Ascomycota. History The word "yeast" comes from Old English gist, gyst, and from the Indo-European root yes-, meaning "boil", "foam", or "bubble". Yeast microbes are probably one of the earliest domesticated organisms. Archaeologists digging in Egyptian ruins found early grinding stones and baking chambers for yeast-raised bread, as well as drawings of 4,000-year-old bakeries and breweries. Vessels studied from several archaeological sites in Israel (dating to around 5,000, 3,000 and 2,500 years ago), which were believed to have contained alcoholic beverages (beer and mead), were found to contain yeast colonies that had survived over the millennia, providing the first direct biological evidence of yeast use in early cultures. In 1680, the Dutch naturalist Anton van Leeuwenhoek made the first microscopic observations of yeast, but at the time considered them globular structures rather than living organisms, and researchers long remained doubtful whether yeasts were algae or fungi. Theodor Schwann recognized them as fungi in 1837. In 1857, French microbiologist Louis Pasteur showed that by bubbling oxygen into the yeast broth, cell growth could be increased, but fermentation was inhibited, an observation later called the "Pasteur effect".
In the paper "Mémoire sur la fermentation alcoolique", Pasteur proved that alcoholic fermentation was conducted by living yeasts and not by a chemical catalyst. By the late 18th century two yeast strains used in brewing had been identified: Saccharomyces cerevisiae (top-fermenting yeast) and S. pastorianus (bottom-fermenting yeast). S. cerevisiae has been sold commercially by the Dutch for bread-making since 1780; while, around 1800, the Germans started producing S. cerevisiae in the form of cream. In 1825, a method was developed to remove the liquid so the yeast could be prepared as solid blocks. The industrial production of yeast blocks was enhanced by the introduction of the filter press in 1867. In 1872, Baron Max de Springer developed a manufacturing process to create granulated yeast from beetroot molasses, a technique that was used until the First World War. In the United States, naturally occurring airborne yeasts were used almost exclusively until commercial yeast was marketed at the Centennial Exposition in 1876 in Philadelphia, where Charles L. Fleischmann exhibited the product and a process to use it, as well as serving the resultant baked bread. The mechanical refrigerator (first patented in the 1850s in Europe) liberated brewers and winemakers from seasonal constraints for the first time and allowed them to exit cellars and other earthen environments. For John Molson, who made his livelihood in Montreal prior to the development of the fridge, the brewing season lasted from September through to May. The same seasonal restrictions formerly governed the distiller's art. Nutrition and growth Yeasts are chemoorganotrophs, as they use organic compounds as a source of energy and do not require sunlight to grow. Carbon is obtained mostly from hexose sugars, such as glucose and fructose, or disaccharides such as sucrose and maltose. Some species can metabolize pentose sugars such as ribose, alcohols, and organic acids. Yeast species either require oxygen for aerobic cellular respiration (obligate aerobes) or are anaerobic but also have aerobic methods of energy production (facultative anaerobes). Unlike bacteria, no known yeast species grow only anaerobically (obligate anaerobes). Most yeasts grow best in a neutral or slightly acidic pH environment. Yeasts vary in regard to the temperature range in which they grow best. For example, Leucosporidium frigidum grows at −2 to 20 °C, Saccharomyces telluris at 5 to 35 °C, and Candida slooffi at 28 to 45 °C. The cells can survive freezing under certain conditions, with viability decreasing over time. In general, yeasts are grown in the laboratory on solid growth media or in liquid broths. Common media used for the cultivation of yeasts include potato dextrose agar or potato dextrose broth, Wallerstein Laboratories nutrient agar, yeast peptone dextrose agar, and yeast mould agar or broth. Home brewers who cultivate yeast frequently use dried malt extract and agar as a solid growth medium. The fungicide cycloheximide is sometimes added to yeast growth media to inhibit the growth of Saccharomyces yeasts and select for wild/indigenous yeast species; this selection changes the character of the resulting fermentation. The appearance of a white, thready yeast, commonly known as kahm yeast, is often a byproduct of the lactofermentation (or pickling) of certain vegetables. It is usually the result of exposure to air. Although harmless, it can give pickled vegetables a bad flavor and must be removed regularly during fermentation.
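Because laboratory cultures like those described above grow roughly exponentially until nutrients run low, a culture's cell density can be estimated from its doubling time. A minimal Python sketch, assuming unrestricted exponential growth and an illustrative doubling time (S. cerevisiae is often quoted at around 90 minutes in rich medium, though the true figure varies with strain, temperature, and medium):

def cells_per_ml(n0, doubling_time_min, elapsed_min):
    # Unrestricted exponential growth: density doubles once per doubling time.
    return n0 * 2 ** (elapsed_min / doubling_time_min)

# Illustrative numbers only: 1e5 cells/mL inoculum, 90-minute doubling time.
for hours in (2, 4, 8):
    print(f"{hours} h: ~{cells_per_ml(1e5, 90, hours * 60):.2e} cells/mL")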
Ecology Yeasts are very common in the environment, and are often isolated from sugar-rich materials. Examples include naturally occurring yeasts on the skins of fruits and berries (such as grapes, apples, or peaches), and exudates from plants (such as plant saps or cacti). Some yeasts are found in association with soil and insects. Yeasts from the soil and from the skins of fruits and berries have been shown to dominate fungal succession during fruit decay. The ecological function and biodiversity of yeasts are relatively unknown compared to those of other microorganisms. Yeasts, including Candida albicans, Rhodotorula rubra, Torulopsis and Trichosporon cutaneum, have been found living between people's toes as part of their skin flora. Yeasts are also present in the gut flora of mammals and some insects, and even deep-sea environments host an array of yeasts. An Indian study of seven bee species and nine plant species found 45 yeast species from 16 genera colonizing the nectaries of flowers and the honey stomachs of bees. Most were members of the genus Candida; the most common species in honey stomachs was Dekkera intermedia, and in flower nectaries, Candida blankii. Yeasts colonising the nectaries of the stinking hellebore have been found to raise the temperature of the flower, which may aid in attracting pollinators by increasing the evaporation of volatile organic compounds. A black yeast has been recorded as a partner in a complex relationship between ants, their mutualistic fungus, a fungal parasite of the fungus, and a bacterium that kills the parasite. The yeast has a negative effect on the bacteria that normally produce antibiotics to kill the parasite, so it may affect the ants' health by allowing the parasite to spread. Certain strains of some species of yeasts produce proteins called yeast killer toxins that allow them to eliminate competing strains (see the main article on killer yeast). This can cause problems for winemaking, but could potentially also be used to advantage by using killer toxin-producing strains to make the wine. Yeast killer toxins may also have medical applications in treating yeast infections (see the "Pathogenic yeasts" section below). Marine yeasts, defined as yeasts that are isolated from marine environments, are able to grow better on a medium prepared using seawater rather than freshwater. The first marine yeasts were isolated by Bernhard Fischer in 1894 from the Atlantic Ocean, and were identified as Torula sp. and Mycoderma sp. Following this discovery, various other marine yeasts have been isolated from around the world from different sources, including seawater, seaweeds, marine fish and mammals. Among these isolates, some marine yeasts originated from terrestrial habitats (grouped as facultative marine yeasts), which were brought to and survived in marine environments. The other marine yeasts were grouped as obligate or indigenous marine yeasts, which are confined to marine habitats. However, there is not yet sufficient evidence to explain why seawater should be indispensable for obligate marine yeasts. It has been reported that marine yeasts are able to produce many bioactive substances, such as amino acids, glucans, glutathione, toxins, enzymes, phytase, and vitamins, with potential applications in the food, pharmaceutical, cosmetic, and chemical industries as well as in marine culture and environmental protection. Marine yeast has been used successfully to produce bioethanol using seawater-based media, which could potentially reduce the water footprint of bioethanol.
Reproduction Yeasts, like all fungi, may have asexual and sexual reproductive cycles. The most common mode of vegetative growth in yeast is asexual reproduction by budding, where a small bud (also known as a bleb or daughter cell) is formed on the parent cell. The nucleus of the parent cell splits into a daughter nucleus and migrates into the daughter cell. The bud then continues to grow until it separates from the parent cell, forming a new cell. The daughter cell produced during the budding process is generally smaller than the mother cell. Some yeasts, including Schizosaccharomyces pombe, reproduce by fission instead of budding, thereby creating two identically sized daughter cells. In general, under high-stress conditions such as nutrient starvation, haploid cells will die; under the same conditions, however, diploid cells can undergo sporulation, entering sexual reproduction (meiosis) and producing a variety of haploid spores, which can go on to mate (conjugate), reforming the diploid. The haploid fission yeast Schizosaccharomyces pombe is a facultative sexual microorganism that can undergo mating when nutrients are limited. Exposure of S. pombe to hydrogen peroxide, an agent that causes oxidative stress leading to oxidative DNA damage, strongly induces mating and the formation of meiotic spores. The budding yeast Saccharomyces cerevisiae reproduces by mitosis as diploid cells when nutrients are abundant, but when starved, this yeast undergoes meiosis to form haploid spores. Haploid cells may then reproduce asexually by mitosis. Katz Ezov et al. presented evidence that in natural S. cerevisiae populations clonal reproduction and selfing (in the form of intratetrad mating) predominate. In nature, the mating of haploid cells to form diploid cells is most often between members of the same clonal population, and out-crossing is uncommon. Analysis of the ancestry of natural S. cerevisiae strains led to the conclusion that out-crossing occurs only about once every 50,000 cell divisions. These observations suggest that the possible long-term benefits of outcrossing (e.g. generation of diversity) are likely to be insufficient for generally maintaining sex from one generation to the next. Rather, a short-term benefit, such as recombinational repair during meiosis, may be the key to the maintenance of sex in S. cerevisiae. Some pucciniomycete yeasts, in particular species of Sporidiobolus and Sporobolomyces, produce aerially dispersed, asexual ballistoconidia. Uses The useful physiological properties of yeast have led to their use in the field of biotechnology. Fermentation of sugars by yeast is the oldest and largest application of this technology. Many types of yeasts are used for making many foods: baker's yeast in bread production, brewer's yeast in beer fermentation, and yeast in wine fermentation and for xylitol production. So-called red rice yeast is actually a mold, Monascus purpureus. Yeasts include some of the most widely used model organisms for genetics and cell biology. Alcoholic beverages Alcoholic beverages are defined as beverages that contain ethanol (C2H5OH). This ethanol is almost always produced by fermentation, the metabolism of carbohydrates by certain species of yeasts under anaerobic or low-oxygen conditions. Beverages such as mead, wine, beer, or distilled spirits all use yeast at some stage of their production. A distilled beverage is a beverage containing ethanol that has been purified by distillation.
Carbohydrate-containing plant material is fermented by yeast, producing a dilute solution of ethanol in the process. Spirits such as whiskey and rum are prepared by distilling these dilute solutions of ethanol. Components other than ethanol are collected in the condensate, including water, esters, and other alcohols, which (in addition to that provided by the oak in which it may be aged) account for the flavour of the beverage. Beer Brewing yeasts may be classed as "top-cropping" (or "top-fermenting") and "bottom-cropping" (or "bottom-fermenting"). Top-cropping yeasts are so called because they form a foam at the top of the wort during fermentation. An example of a top-cropping yeast is Saccharomyces cerevisiae, sometimes called an "ale yeast". Bottom-cropping yeasts are typically used to produce lager-type beers, though they can also produce ale-type beers. These yeasts ferment well at low temperatures. An example of bottom-cropping yeast is Saccharomyces pastorianus, formerly known as S. carlsbergensis. Decades ago, taxonomists reclassified S. carlsbergensis (uvarum) as a member of S. cerevisiae, noting that the only distinct difference between the two is metabolic. Lager strains of S. cerevisiae secrete an enzyme called melibiase, allowing them to hydrolyse melibiose, a disaccharide, into more fermentable monosaccharides. Top- and bottom-cropping and cold- and warm-fermenting distinctions are largely generalizations used by laypersons to communicate to the general public. The most common top-cropping brewer's yeast, S. cerevisiae, is the same species as the common baking yeast. Brewer's yeast is also very rich in essential minerals and the B vitamins (except B12), a feature exploited in food products made from leftover (by-product) yeast from brewing. However, baking and brewing yeasts typically belong to different strains, cultivated to favour different characteristics: baking yeast strains are more aggressive, to carbonate dough in the shortest amount of time possible; brewing yeast strains act more slowly but tend to produce fewer off-flavours and tolerate higher alcohol concentrations (with some strains, up to 22%). Dekkera/Brettanomyces is a genus of yeast known for its important role in the production of 'lambic' and specialty sour ales, along with the secondary conditioning of a particular Belgian Trappist beer. The taxonomy of the genus Brettanomyces has been debated since its early discovery and has seen many reclassifications over the years. Early classification was based on a few species that reproduced asexually (anamorph form) through multipolar budding. Shortly after, the formation of ascospores was observed and the genus Dekkera, which reproduces sexually (teleomorph form), was introduced as part of the taxonomy. The current taxonomy includes five species within the genera of Dekkera/Brettanomyces. Those are the anamorphs Brettanomyces bruxellensis, Brettanomyces anomalus, Brettanomyces custersianus, Brettanomyces naardenensis, and Brettanomyces nanus, with teleomorphs existing for the first two species, Dekkera bruxellensis and Dekkera anomala. The distinction between Dekkera and Brettanomyces is arguable, with Oelofse et al. (2008) citing Loureiro and Malfeito-Ferreira from 2006 when they affirmed that current molecular DNA detection techniques have uncovered no variance between the anamorph and teleomorph states. Over the past decade, Brettanomyces spp. 
have seen increasing use in the craft-brewing sector of the industry, with a handful of breweries having produced beers that were primarily fermented with pure cultures of Brettanomyces spp. Much of this has been experimental, since very little information exists regarding pure culture fermentative capabilities and the aromatic compounds produced by various strains. Dekkera/Brettanomyces spp. have been the subjects of numerous studies conducted over the past century, although a majority of the recent research has focused on enhancing the knowledge of the wine industry. Recent research on eight Brettanomyces strains available in the brewing industry focused on strain-specific fermentations and identified the major compounds produced during pure culture anaerobic fermentation in wort. Wine Yeast is used in winemaking, where it converts the sugars present (glucose and fructose) in grape juice (must) into ethanol. Yeast is normally already present on grape skins. Fermentation can be done with this endogenous "wild yeast", but this procedure gives unpredictable results, which depend upon the exact types of yeast species present. For this reason, a pure yeast culture is usually added to the must; this yeast quickly dominates the fermentation. The wild yeasts are repressed, which ensures a reliable and predictable fermentation. Most added wine yeasts are strains of S. cerevisiae, though not all strains of the species are suitable. Different S. cerevisiae yeast strains have differing physiological and fermentative properties; therefore, the actual strain of yeast selected can have a direct impact on the finished wine. Significant research has been undertaken into the development of novel wine yeast strains that produce atypical flavour profiles or increased complexity in wines. The growth of some yeasts, such as Zygosaccharomyces and Brettanomyces, in wine can result in wine faults and subsequent spoilage. Brettanomyces produces an array of metabolites when growing in wine, some of which are volatile phenolic compounds. Together, these compounds are often referred to as "Brettanomyces character", and are often described as "antiseptic" or "barnyard" type aromas. Brettanomyces is a significant contributor to wine faults within the wine industry. Researchers from the University of British Columbia, Canada, have found a new strain of yeast that produces reduced levels of amines. The amines in red wine and Chardonnay produce off-flavors and cause headaches and hypertension in some people. About 30% of people are sensitive to biogenic amines, such as histamines.
The first records that show this use came from Ancient Egypt. Researchers speculate that a mixture of flour meal and water was left longer than usual on a warm day, and the yeasts that occur as natural contaminants of the flour caused it to ferment before baking. The resulting bread would have been lighter and tastier than the normal flat, hard cake. Today, there are several retailers of baker's yeast; one of the earliest developments in North America was Fleischmann's Yeast, introduced in 1868. During World War II, Fleischmann's developed a granulated active dry yeast which did not require refrigeration, had a longer shelf life than fresh yeast, and rose twice as fast. Baker's yeast is also sold as a fresh yeast compressed into a square "cake". This form perishes quickly, so it must be used soon after production. A weak solution of water and sugar can be used to determine whether yeast is expired. In the solution, active yeast will foam and bubble as it ferments the sugar into ethanol and carbon dioxide. Some recipes refer to this as proofing the yeast, as it "proves" (tests) the viability of the yeast before the other ingredients are added. When a sourdough starter is used, flour and water are added instead of sugar; this is referred to as proofing the sponge. When yeast is used for making bread, it is mixed with flour, salt, and warm water or milk. The dough is kneaded until it is smooth, and then left to rise, sometimes until it has doubled in size. The dough is then shaped into loaves. Some bread doughs are knocked back after one rising and left to rise again (this is called dough proofing) and then baked. A longer rising time gives a better flavor, but the yeast can fail to raise the bread in the final stages if it is left for too long initially. Bioremediation Some yeasts can find potential application in the field of bioremediation. One such yeast, Yarrowia lipolytica, is known to degrade palm oil mill effluent, TNT (an explosive material), and other hydrocarbons, such as alkanes, fatty acids, fats and oils. It can also tolerate high concentrations of salt and heavy metals, and is being investigated for its potential as a heavy metal biosorbent. Saccharomyces cerevisiae has potential to bioremediate toxic pollutants like arsenic from industrial effluent. Bronze statues are known to be degraded by certain species of yeast. Different yeasts from Brazilian gold mines bioaccumulate free and complexed silver ions. Industrial ethanol production The ability of yeast to convert sugar into ethanol has been harnessed by the biotechnology industry to produce ethanol fuel. The process starts by milling a feedstock, such as sugar cane, field corn, or other cereal grains, and then adding dilute sulfuric acid, or fungal alpha amylase enzymes, to break down the starches into complex sugars. A glucoamylase is then added to break the complex sugars down into simple sugars. After this, yeasts are added to convert the simple sugars to ethanol, which is then distilled off to obtain ethanol of up to 96% purity. Saccharomyces yeasts have been genetically engineered to ferment xylose, one of the major fermentable sugars present in cellulosic biomasses, such as agriculture residues, paper wastes, and wood chips. Such a development means ethanol can be efficiently produced from more inexpensive feedstocks, making cellulosic ethanol fuel a more competitively priced alternative to gasoline fuels.
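The yields in this process are bounded by the fermentation stoichiometry: each glucose molecule gives at most two molecules of ethanol and two of carbon dioxide (C6H12O6 → 2 C2H5OH + 2 CO2). A minimal Python sketch of the resulting theoretical mass yield, about 51% ethanol by mass of glucose; real plants recover less, since yeast diverts some sugar to biomass and by-products:

# Molar masses in g/mol.
M_GLUCOSE = 180.16  # C6H12O6
M_ETHANOL = 46.07   # C2H5OH
M_CO2 = 44.01

def max_ethanol_g(glucose_g):
    # C6H12O6 -> 2 C2H5OH + 2 CO2: two moles of ethanol per mole of glucose.
    return 2 * (glucose_g / M_GLUCOSE) * M_ETHANOL

print(max_ethanol_g(1000))             # ~511 g ethanol per kg glucose
print(2 * (1000 / M_GLUCOSE) * M_CO2)  # ~489 g CO2 per kg glucose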
Nonalcoholic beverages A number of sweet carbonated beverages can be produced by the same methods as beer, except the fermentation is stopped sooner, producing carbon dioxide but only trace amounts of alcohol, and leaving a significant amount of residual sugar in the drink. Examples include:
- Root beer, originally made by Native Americans, commercialized in the United States by Charles Elmer Hires, and especially popular during Prohibition
- Kvass, a fermented drink made from rye, popular in Eastern Europe; it has a recognizable, but low, alcohol content
- Kombucha, a fermented sweetened tea. Yeast in symbiosis with acetic acid bacteria is used in its preparation. Species of yeasts found in the tea can vary, and may include: Brettanomyces bruxellensis, Candida stellata, Schizosaccharomyces pombe, Torulaspora delbrueckii and Zygosaccharomyces bailii. It is also popular in Eastern Europe and some former Soviet republics under the name chajnyj grib (), which means "tea mushroom"
- Kefir and kumis, made by fermenting milk with yeast and bacteria
- Mauby (), made by fermenting sugar with the wild yeasts naturally present on the bark of the Colubrina elliptica tree, popular in the Caribbean
Foods Yeast is used as an ingredient in foods for its umami flavor, in much the same way that monosodium glutamate (MSG) is used and, like MSG, yeast often contains free glutamic acid. Examples include:
- Yeast extract, made from the intracellular contents of yeast and used as a food additive or flavouring. The general method for making yeast extract for food products such as Vegemite and Marmite on a commercial scale is heat autolysis, i.e. adding salt to a suspension of yeast, making the solution hypertonic, which leads to the cells' shrivelling up. This triggers autolysis, wherein the yeast's digestive enzymes break their own proteins down into simpler compounds, a process of self-destruction. The dying yeast cells are then heated to complete their breakdown, after which the husks (yeast with thick cell walls that would give poor texture) are removed. Yeast autolysates are used in Vegemite and Promite (Australia); Marmite (the United Kingdom); the unrelated Marmite (New Zealand); Vitam-R (Germany); and Cenovis (Switzerland)
- Nutritional yeast, which is whole dried, deactivated yeast cells, usually S. cerevisiae. Usually in the form of yellow flakes or powder, its nutty and umami flavor makes it a vegan substitute for cheese powder. Another popular use is as a topping for popcorn. It can also be used in mashed and fried potatoes, as well as in scrambled eggs. It comes in the form of flakes, or as a yellow powder similar in texture to cornmeal. In Australia, it is sometimes sold as "savoury yeast flakes"
Both types of yeast foods above are rich in B-complex vitamins (except vitamin B12, unless fortified), making them an attractive nutritional supplement for vegans. The same vitamins are also found in some yeast-fermented products mentioned above, such as kvass. Nutritional yeast in particular is naturally low in fat and sodium and a source of protein and vitamins as well as other minerals and cofactors required for growth. Many brands of nutritional yeast and yeast extract spreads, though not all, are fortified with vitamin B12, which is produced separately by bacteria. In 1920, the Fleischmann Yeast Company began to promote yeast cakes in a "Yeast for Health" campaign. They initially emphasized yeast as a source of vitamins, good for skin and digestion.
Their later advertising claimed a much broader range of health benefits, and was censured as misleading by the Federal Trade Commission. The fad for yeast cakes lasted until the late 1930s. Probiotics Some probiotic supplements use the yeast S. boulardii to maintain and restore the natural flora in the gastrointestinal tract. S. boulardii has been shown to reduce the symptoms of acute diarrhea, reduce the chance of infection by Clostridium difficile (often identified simply as C. difficile or C. diff), reduce bowel movements in diarrhea-predominant IBS patients, and reduce the incidence of antibiotic-, traveler's-, and HIV/AIDS-associated diarrheas. Aquarium hobby Yeast is often used by aquarium hobbyists to generate carbon dioxide (CO2) to nourish plants in planted aquaria. CO2 levels from yeast are more difficult to regulate than those from pressurized CO2 systems. However, the low cost of yeast makes it a widely used alternative. Scientific research Several yeasts, in particular S. cerevisiae and S. pombe, have been widely used in genetics and cell biology, largely because they are simple eukaryotic cells, serving as a model for all eukaryotes, including humans, for the study of fundamental cellular processes such as the cell cycle, DNA replication, recombination, cell division, and metabolism. Also, yeasts are easily manipulated and cultured in the laboratory, which has allowed for the development of powerful standard techniques, such as yeast two-hybrid, synthetic genetic array analysis, and tetrad analysis. Many proteins important in human biology were first discovered by studying their homologues in yeast; these proteins include cell cycle proteins, signaling proteins, and protein-processing enzymes. On 24 April 1996, S. cerevisiae was announced to be the first eukaryote to have its genome, consisting of 12 million base pairs, fully sequenced as part of the Genome Project. At the time, it was the most complex organism to have its full genome sequenced; the work took seven years and involved more than 100 laboratories. The second yeast species to have its genome sequenced was Schizosaccharomyces pombe, which was completed in 2002. It was the sixth eukaryotic genome sequenced and consists of 13.8 million base pairs. As of 2014, over 50 yeast species have had their genomes sequenced and published. Genomic and functional gene annotation of the two major yeast models can be accessed via their respective model organism databases: SGD and PomBase. Genetically engineered biofactories Various yeast species have been genetically engineered to efficiently produce a range of drugs, a technique called metabolic engineering. S. cerevisiae is easy to genetically engineer; its physiology, metabolism and genetics are well known, and it is amenable for use in harsh industrial conditions. A wide variety of chemicals in different classes can be produced by engineered yeast, including phenolics, isoprenoids, alkaloids, and polyketides. About 20% of biopharmaceuticals are produced in S. cerevisiae, including insulin, vaccines for hepatitis, and human serum albumin. Pathogenic yeasts Some species of yeast are opportunistic pathogens that can cause infection in people with compromised immune systems. Cryptococcus neoformans and Cryptococcus gattii are significant pathogens of immunocompromised people. They are the species primarily responsible for cryptococcosis, a fungal infection that occurs in about one million HIV/AIDS patients, causing over 600,000 deaths annually.
The cells of these yeasts are surrounded by a rigid polysaccharide capsule, which helps to prevent them from being recognised and engulfed by white blood cells in the human body. Yeasts of the genus Candida, another group of opportunistic pathogens, cause oral and vaginal infections in humans, known as candidiasis. Candida is commonly found as a commensal yeast in the mucous membranes of humans and other warm-blooded animals. However, sometimes these same strains can become pathogenic. The yeast cells sprout a hyphal outgrowth, which locally penetrates the mucosal membrane, causing irritation and shedding of the tissues. A book from the 1980s listed the pathogenic yeasts of candidiasis in probable descending order of virulence for humans as: C. albicans, C. tropicalis, C. stellatoidea, C. glabrata, C. krusei, C. parapsilosis, C. guilliermondii, C. viswanathii, C. lusitaniae, and Rhodotorula mucilaginosa. Candida glabrata is the second most common Candida pathogen after C. albicans, causing infections of the urogenital tract, and of the bloodstream (candidemia). C. auris has been identified more recently. Food spoilage Yeasts are able to grow in foods with a low pH (5.0 or lower) and in the presence of sugars, organic acids, and other easily metabolized carbon sources. During their growth, yeasts metabolize some food components and produce metabolic end products. This causes the physical, chemical, and sensory properties of a food to change, and the food is spoiled. The growth of yeast within food products is often seen on their surfaces, as in cheeses or meats, or by the fermentation of sugars in beverages, such as juices, and semiliquid products, such as syrups and jams. Yeasts of the genus Zygosaccharomyces have had a long history as spoilage yeasts within the food industry. This is mainly because these species can grow in the presence of high concentrations of sucrose, ethanol, acetic acid, sorbic acid, benzoic acid, and sulfur dioxide, which represent some of the commonly used food preservation methods. Methylene blue is used to test for the presence of live yeast cells. In oenology, the major spoilage yeast is Brettanomyces bruxellensis. Candida blankii has been detected in Iberian ham and meat. Symbiosis An Indian study of seven bee species and nine plant species found that 45 yeast species from 16 genera colonise the nectaries of flowers and the honey stomachs of bees. Most were members of the genus Candida; the most common species in honey bee stomachs was Dekkera intermedia, while the most common species colonising flower nectaries was Candida blankii. Although the mechanism is not fully understood, it was found that A. indica flowers more if Candida blankii is present. In another example, Spathaspora passalidarum, found in the digestive tract of bess beetles, aids the digestion of plant cells by fermenting xylose. Many fruits produce different types of sugars that attract yeasts, which ferment the sugars into alcohol. Fruit-eating mammals find the scent of alcohol attractive, as it indicates a ripe, sugary fruit that provides more nutrition. In turn, the mammals help disperse both the fruit's seeds and the yeast's spores. Yeast and the small hive beetle have a mutualistic relationship: while the small hive beetle is attracted by the pheromone released by the host honeybee, yeast can produce a similar pheromone which has the same attractive effect on the beetle. A beehive that contains yeast therefore facilitates infestation by the small hive beetle.
Biology and health sciences
Fungi
null
34411
https://en.wikipedia.org/wiki/Zodiac
Zodiac
The zodiac is a belt-shaped region of the sky that extends approximately 8° north and south, in celestial latitude, of the ecliptic – the apparent path of the Sun across the celestial sphere over the course of the year. Within this zodiac belt appear the Moon and the brightest planets, along their orbital planes. The zodiac is divided along the ecliptic into 12 equal parts, called "signs", each occupying 30° of celestial longitude. These signs roughly correspond to the astronomical constellations with the following modern names: Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, and Pisces. The signs have been used to determine the time of the year by identifying each sign with the days of the year the Sun is in the respective sign. In Western astrology, and formerly astronomy, the time of each sign is associated with different attributes. The zodiacal system and its angular measurement in 360 sexagesimal degrees (°) originated with Babylonian astronomy during the 1st millennium BC. It was communicated into Greek astronomy by the 2nd century BC, and also contributed to the developing Hindu zodiac. Due to the precession of the equinoxes, the time of year that the Sun is in a given constellation has changed since Babylonian times, and the point of the March equinox has moved from Aries into Pisces. The zodiac forms a celestial coordinate system, or more specifically an ecliptic coordinate system, which takes the ecliptic as the origin of latitude and the Sun's position at the vernal equinox as the origin of longitude. In modern astronomy, the ecliptic coordinate system is still used for tracking Solar System objects. Name The English word derives from zodiacus, the Latinized form of the Ancient Greek zōidiakòs kýklos (ζῳδιακὸς κύκλος), meaning "cycle or circle of little animals". Zōidion (ζῴδιον) is the diminutive of zōion (ζῷον, "animal"). The name reflects the prominence of animals (and mythological hybrids) among the twelve signs. Usage The zodiac was in use by the Roman era, based on concepts inherited by Hellenistic astronomy from Babylonian astronomy of the Chaldean period (mid-1st millennium BC), which, in turn, derived from an earlier system of lists of stars along the ecliptic. The construction of the zodiac is described in Ptolemy's comprehensive 2nd century AD work, the Almagest. Although the zodiac remains the basis of the ecliptic coordinate system in use in astronomy besides the equatorial one, the term and the names of the twelve signs are today mostly associated with horoscopic astrology. The term "zodiac" may also refer to the region of the celestial sphere encompassing the paths of the planets corresponding to the band of about 8 arc degrees above and below the ecliptic. The zodiac of a given planet is the band that contains the path of that particular body; e.g., the "zodiac of the Moon" is the band of 5° above and below the ecliptic. By extension, the "zodiac of the comets" may refer to the band encompassing most short-period comets. History Early history As early as the 14th century BC a complete list of the 36 Egyptian decans was placed among the hieroglyphs adorning the tomb of Seti I; they figured again in the temple of Ramesses II, and characterize every Egyptian astrological monument. Both of the famous zodiacs of Dendera display their symbols, which were identified by Karl Richard Lepsius. The division of the ecliptic into the zodiacal signs originates in Babylonian astronomy during the first half of the 1st millennium BC.
The zodiac draws on stars in earlier Babylonian star catalogues, such as the MUL.APIN catalogue, which was compiled around 1000 BC. Some constellations can be traced even further back, to Bronze Age (Old Babylonian Empire) sources, including Gemini "The Twins", from "The Great Twins"; and Cancer "The Crab", from "The Crayfish", among others. Around the end of the fifth century BC, Babylonian astronomers divided the ecliptic into 12 equal "signs", by analogy to 12 schematic months of 30 days each. Each sign contained 30° of celestial longitude, thus creating the first known celestial coordinate system. According to calculations by modern astrophysics, the zodiac was introduced between 409 and 398 BC, during Persian rule, and probably within a very few years of 401 BC. Unlike modern astrologers, who place the beginning of the sign of Aries at the position of the Sun at the vernal equinox in the Northern Hemisphere (March equinox), Babylonian astronomers fixed the zodiac in relation to stars, placing the beginning of Cancer at the "Rear Twin Star" (β Geminorum) and the beginning of Aquarius at the "Rear Star of the Goat-Fish" (δ Capricorni). Due to the precession of the equinoxes, the time of year the Sun is in a given constellation has changed since Babylonian times, as the point of the March equinox has moved from Aries into Pisces. Because the divisions were made into equal arcs of 30° each, they constituted an ideal system of reference for making predictions about a planet's longitude. However, Babylonian techniques of observational measurement were in a rudimentary stage of evolution. They measured the position of a planet in reference to a set of "normal stars" close to the ecliptic (±9° of latitude). The normal stars were used as observational reference points to help position a planet within this ecliptic coordinate system. In Babylonian astronomical diaries, a planet's position was generally given with respect to a zodiacal sign alone, and less often in specific degrees within a sign. When the degrees of longitude were given, they were expressed with reference to the 30° of the zodiacal sign, i.e., not with reference to the continuous 360° ecliptic. In astronomical ephemerides, the positions of significant astronomical phenomena were computed in sexagesimal fractions of a degree (equivalent to minutes and seconds of arc). For daily ephemerides, the daily positions of a planet were not as important as the astrologically significant dates when the planet crossed from one zodiacal sign to the next. Hebrew astronomy and astrology Knowledge of the Babylonian zodiac is said to be reflected in the Hebrew Bible; E. W. Bullinger interpreted the creatures that appear in the book of Ezekiel (1:10) as the middle signs of the four quarters of the zodiac, with the Lion as Leo, the Bull as Taurus, the Man as Aquarius and the Eagle as a higher aspect of Scorpio. Some authors have linked the signs of the zodiac with the twelve tribes of Israel, and with the lunar Hebrew calendar, which has twelve lunar months in a lunar year. Martin and others have argued that the arrangement of the tribes around the Tabernacle (reported in the Book of Numbers) corresponded to the order of the zodiac, with Judah, Reuben, Ephraim, and Dan representing the middle signs of Leo, Aquarius, Taurus, and Scorpio, respectively. Such connections were taken up by Thomas Mann, who in his novel Joseph and His Brothers attributes characteristics of a sign of the zodiac to each tribe in his rendition of the Blessing of Jacob.
Hellenistic and Roman era The Babylonian star catalogs entered Greek astronomy in the 4th century BC, via Eudoxus of Cnidus. Babylonia or Chaldea in the Hellenistic world came to be so identified with astrology that "Chaldean wisdom" became, among Greeks and Romans, the synonym of divination through the planets and stars. Hellenistic astrology derived in part from Babylonian and Egyptian astrology. Horoscopic astrology first appeared in Ptolemaic Egypt (305 BC–30 BC). The Dendera zodiac, a relief dating to circa 50 BC, is the first known depiction of the classical zodiac of twelve signs. The earliest extant Greek text using the Babylonian division of the zodiac into 12 signs of 30 equal degrees each is the Anaphoricus of Hypsicles of Alexandria (fl. 190 BC). Particularly important in the development of Western horoscopic astrology was the astrologer and astronomer Ptolemy, whose work Tetrabiblos laid the basis of the Western astrological tradition. Under the Greeks, and Ptolemy in particular, the planets, Houses, and signs of the zodiac were rationalized and their function set down in a way that has changed little to the present day. Ptolemy lived in the 2nd century AD, three centuries after the discovery of the precession of the equinoxes by Hipparchus around 130 BC. Hipparchus' lost work on precession never circulated very widely until it was brought to prominence by Ptolemy, and there are few explanations of precession outside the work of Ptolemy until late Antiquity, by which time Ptolemy's influence was widely established. Ptolemy clearly explained the theoretical basis of the western zodiac as being a tropical coordinate system, by which the zodiac is aligned to the equinoxes and solstices, rather than to the visible constellations that bear the same names as the zodiac signs. Hindu zodiac According to the mathematician-historian Montucla, the Hindu zodiac was adopted from the Greek zodiac through communications between ancient India and the Greek empire of Bactria. The Hindu zodiac uses the sidereal coordinate system, which makes reference to the fixed stars. The tropical zodiac (of Mesopotamian origin) is divided by the intersections of the ecliptic and equator, which shift in relation to the backdrop of fixed stars at a rate of 1° every 72 years, creating the phenomenon known as the precession of the equinoxes. The Hindu zodiac, being sidereal, does not maintain this seasonal alignment, but there are still similarities between the two systems. The Hindu zodiac signs and the corresponding Greek signs sound very different, being in Sanskrit and Greek respectively, but their symbols are nearly identical. For example, dhanu means "bow" and corresponds to Sagittarius, the "archer", and kumbha means "water-pitcher" and corresponds to Aquarius, the "water-carrier". Middle Ages During the Abbasid era, Greek reference books were translated into Arabic, and Islamic astronomers then made their own observations, correcting Ptolemy's Almagest. One such book was Al-Sufi's Book of Fixed Stars, which has pictorial depictions of 48 constellations. The book was divided into three sections: constellations of the zodiac, constellations north of the zodiac, and southern constellations. When Al-Sufi's book, and other works, were translated in the 11th century, mistakes were made in the translations. As a result, some stars ended up with the names of the constellations to which they belong (e.g. Hamal in Aries).
The High Middle Ages saw a revival of interest in Greco-Roman magic, first in Kabbalism and later continued in Renaissance magic. This included magical uses of the zodiac, as found, e.g., in the Sefer Raziel HaMalakh. The zodiac is found in medieval stained glass, as at Angers Cathedral, where the master glass maker, André Robin, made the ornate rosettes for the North and South transepts after the fire there in 1451. Medieval Islamic era Astrology emerged in the 8th century AD as a distinct discipline in Islam, with a mix of Indian, Hellenistic, Iranian and other traditions blended with Greek and Islamic astronomical knowledge, for example Ptolemy's work and Al-Sufi's Book of Fixed Stars. A knowledge of the influence that the stars have on events on the earth was important in Islamic civilization. As a rule, it was believed that the signs of the zodiac and the planets control the destiny not only of people but also of nations, and that the zodiac has the ability to determine a person's physical characteristics as well as intelligence and personal traits. The practice of astrology at this time could be divided into four broad categories: Genethlialogy, Catarchic Astrology, Interrogational Astrology and General Astrology. The most common type of astrology, however, was Genethlialogy, which examined all aspects of a person's life in relation to the planetary positions at their birth; this is more commonly known as a horoscope. Astrology services were offered widely across the empire, mainly in bazaars, where people could pay for a reading. Astrology was valued in the royal courts; for example, the Abbasid Caliph Al-Mansur used astrology to determine the best date for founding the new capital of Baghdad. Whilst horoscopes were generally widely accepted by society, many scholars condemned the use of astrology and divination, linking it to occult influences. Many theologians and scholars thought that it went against the tenets of Islam, as only God, rather than astrologers looking at the positions of the planets, should be able to determine events. In order to calculate someone's horoscope, an astrologer would use three tools: an astrolabe, an ephemeris and a takht. First, the astrologer would use the astrolabe to find the position of the sun, align the rule with the person's time of birth, and then align the rete to establish the altitude of the sun on that date. Next, the astrologer would consult the ephemeris, a table denoting the mean position of the planets and stars within the sky at any given time. Finally, the astrologer would combine the altitude of the sun taken from the astrolabe with the mean positions of the planets on the person's birthday, working on the takht (also known as the dustboard), a tablet covered in sand on which the calculations could be made and erased easily. Once this had been calculated, the astrologer was then able to interpret the horoscope. Most of these interpretations were based on the zodiac in literature; for example, there were several manuals on how to interpret each zodiac sign, with a treatise relating to each individual sign and the characteristics attributed to it. Early modern An example of the use of signs as astronomical coordinates may be found in the Nautical Almanac and Astronomical Ephemeris for the year 1767. The "Longitude of the Sun" columns show the sign (represented as a digit from 0 up to and including 11), degrees from 0 to 29, minutes, and seconds.
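This sign-plus-degrees notation is simply a re-encoding of continuous ecliptic longitude. A minimal sketch of the conversion (the function name and sample value are illustrative, not taken from the almanac itself):

```python
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def almanac_longitude(longitude_deg: float) -> str:
    """Re-encode a continuous ecliptic longitude (0-360 degrees, measured
    eastward from the vernal equinox) in the sign / degrees / minutes /
    seconds style used in 18th-century ephemerides."""
    lon = longitude_deg % 360.0
    sign = int(lon // 30)                  # sign digit 0..11, 30 degrees each
    within = lon - 30 * sign               # position inside the sign
    degrees = int(within)
    minutes = int((within - degrees) * 60)
    seconds = ((within - degrees) * 60 - minutes) * 60
    return f"sign {sign} ({SIGNS[sign]}), {degrees} deg {minutes}' {seconds:.0f}\""

print(almanac_longitude(123.456))  # sign 4 (Leo), 3 deg 27' 22"
```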
Mughal king Jahangir issued an attractive series of coins in gold and silver depicting the twelve signs of the zodiac. Twelve signs What follows is a list of the signs of the modern zodiac (with the ecliptic longitudes of their first points), where 0° Aries is understood as the vernal equinox, with their Latin, Greek, Sanskrit, and Babylonian names. But the Sanskrit and the Babylonian name equivalents (after c. 500 BC) denote the constellations only, not the tropical zodiac signs. The "English translation" is not usually used by English speakers. The Latin names are standard English usage (except that "Capricorn" is used rather than "Capricornus"). These twelve signs have been arranged into a nursery rhyme as a mnemonic device: Another mnemonic is A Tense Gray Cat Lay Very Low, Sneaking Slowly, Contemplating A Pounce. The following table compares the Gregorian dates on which the Sun enters a sign in the Ptolemaic tropical zodiac, and a sign in two sidereal systems: one proposed by Cyril Fagan, and a fourteen-sign system proposed by Steven Schmidt which adds Ophiuchus (see below) and Cetus (the IAU boundaries of which just graze the ecliptic): The beginning of Aries is defined as the moment of the vernal equinox, and all other dates shift accordingly. The precise Gregorian times and dates vary slightly from year to year as the Gregorian calendar shifts relative to the tropical year. These variations remain within less than two days' difference in the recent past and the near future, with the vernal equinox in UT always falling on either 20 or 21 March in the period from 1797 to 2043, having last fallen on 19 March in 1796 and next doing so in 2044. The vernal equinox has fallen on 20 March UT since 2008, and will continue to do so until 2043. As each sign takes up exactly 30 degrees of the zodiac, the average duration of the solar stay in each sign is one twelfth of a sidereal year, or 30.43 standard days. Due to Earth's slight orbital eccentricity, the duration of each sign varies appreciably, between about 29.4 days for Capricorn and about 31.4 days for Cancer (see Equation of time). In addition, because the Earth's axis is at an angle, some signs take longer to rise than others, and the farther away from the equator the observer is situated, the greater the difference. Thus, signs are spoken of as being of "long" or "short" ascension. Constellations In tropical astrology, the zodiacal signs are distinct from the constellations associated with them, not only because of their drifting apart due to the precession of equinoxes but because the physical constellations take up varying widths of the ecliptic, so the Sun is not in each constellation for the same amount of time. Thus, Virgo takes up five times as much ecliptic longitude as Scorpius. The zodiacal signs are an abstraction from the physical constellations, and each represents exactly one twelfth of the full circle, but the time spent by the Sun in each sign varies slightly due to the eccentricity of the Earth's orbit. Sidereal astrology assigns each zodiac sign approximately to its corresponding constellation; this alignment needs recalibrating every so often to keep it in place. The ecliptic intersects with 13 constellations of Ptolemy's Almagest, as well as 13 of the more precisely delineated IAU-designated constellations. In addition to the twelve constellations after which the twelve zodiac signs are named, the ecliptic intersects Ophiuchus, the lower part of which lies between Scorpius and Sagittarius.
Occasionally this difference between the astronomical constellations and the astrological signs is mistakenly reported in the popular press as a "change" to the list of traditional signs by some astronomical body like the IAU, NASA, or the Royal Astronomical Society. This happened in a 1995 report of the BBC Nine O'Clock News and in various reports in 2011 and 2016. Some "parazodiacal" constellations are touched by the paths of the planets, leading to counts of up to 25 "constellations of the zodiac". The ancient Babylonian MUL.APIN catalog lists Orion, Perseus, Auriga, and Andromeda. Modern astronomers have noted that planets pass through Crater, Sextans, Cetus, Pegasus, Corvus, Hydra, Orion, and Scutum, with Venus very rarely passing through Aquila, Canis Minor, Auriga, and Serpens. Some other constellations are mythologically associated with the zodiacal ones: Piscis Austrinus, The Southern Fish, is attached to Aquarius; in classical maps, it swallows the stream poured out of Aquarius' pitcher, but perhaps it formerly just swam in it. Aquila, The Eagle, was possibly associated with the zodiac by virtue of its main star, Altair. Hydra in the Early Bronze Age marked the celestial equator and was associated with Leo, which is shown standing on the serpent on the Dendera zodiac. Precession of the equinoxes The zodiac system was developed in Babylonia, some 2,500 years ago, during the "Age of Aries". At the time, it is assumed, the precession of the equinoxes was unknown. Contemporary use of the coordinate system is presented with the choice of interpreting the system either as sidereal, with the signs fixed to the stellar background, or as tropical, with the signs fixed to the point (vector of the Sun) at the March equinox. Western astrology takes the tropical approach, whereas Hindu astrology takes the sidereal one. This results in the two interpretations of the originally unified zodiacal coordinate system gradually drifting apart, with a clockwise (westward) precession of 1.4 degrees per century. For the tropical zodiac used in Western astronomy and astrology, this means that the tropical sign of Aries currently lies somewhere within the constellation Pisces ("Age of Pisces"). The sidereal coordinate system takes into account the ayanamsa, from ayan meaning "transit" or "movement", and amsa meaning "small part", i.e. the movement of the equinoxes in small parts. It is unclear when Indians became aware of the precession of the equinoxes, but Bhāskara II's 12th-century treatise Siddhanta Shiromani gives equations for measurement of the precession of the equinoxes, and says his equations are based on some lost equations of Suryasiddhanta plus the equation of Munjaala. The discovery of precession is attributed to Hipparchus around 130 BC. Ptolemy quotes from Hipparchus' now-lost work entitled "On the Displacement of the Solstitial and Equinoctial Points" in the seventh book of his 2nd century astronomical text, the Almagest, where he describes the phenomenon of precession and estimates its value. Ptolemy clarified that the convention of Greek mathematical astronomy was to commence the zodiac from the point of the vernal equinox and to always refer to this point as "the first degree" of Aries. This is known as the "tropical zodiac" (from the Greek word trópos, turn) because its starting point revolves through the circle of background constellations over time. The principle of the vernal point acting as the first degree of the zodiac for Greek astronomers is described in the 1st century BC astronomical text of Geminus of Rhodes.
Geminus explains that Greek astronomers of his era associate the first degrees of the zodiac signs with the two solstices and the two equinoxes, in contrast to the older Chaldean (Babylonian) system, which placed these points within the zodiac signs. This illustrates that Ptolemy merely clarified the convention of Greek astronomers and did not originate the principle of the tropical zodiac, as is sometimes assumed. Ptolemy demonstrates that the principle of the tropical zodiac was well known to his predecessors within his astrological text, the Tetrabiblos, where he explains why it would be an error to associate the regularly spaced signs of the seasonally aligned zodiac with the irregular boundaries of the visible constellations: In modern astronomy Astronomically, the zodiac defines a belt of space extending 8° or 9° in celestial latitude to the north and south of the ecliptic, within which the orbits of the Moon and the principal planets remain. It is a feature of the ecliptic coordinate system – a celestial coordinate system centered upon the ecliptic (the plane of the Earth's orbit and the Sun's apparent path), by which celestial longitude is measured in degrees east of the vernal equinox (the ascending intersection of the ecliptic and equator). The zodiac is narrow in angular terms because most of the Sun's planets have orbits that have only a slight inclination to the orbital plane of the Earth. Stars within the zodiac are subject to occultations by the Moon and other solar system bodies. These events can be useful, for example, to estimate the cross-sectional dimensions of a minor planet, or to check a star for a close companion. The Sun's placement upon the vernal equinox, which occurs annually around 21 March, defines the starting point for measurement, the first degree of which is historically known as the "first point of Aries". The first 30° along the ecliptic is nominally designated as the zodiac sign Aries, which no longer falls within the proximity of the constellation Aries, since the effect of precession is to move the vernal point through the backdrop of visible constellations. It is currently located near the end of the constellation Pisces, having been within that constellation since the 2nd century AD. The subsequent 30° of the ecliptic is nominally designated the zodiac sign Taurus, and so on through the twelve signs of the zodiac, so that each occupies one twelfth (30°) of the zodiac's great circle. Zodiac signs have never been used to determine the boundaries of astronomical constellations that lie in the vicinity of the zodiac, which are, and always have been, irregular in their size and shape. The convention of measuring celestial longitude within individual signs was still being used in the mid-19th century, but modern astronomy now numbers degrees of celestial longitude continuously from 0° to 360°, rather than 0° to 30° within each sign. This coordinate system is primarily used by astronomers for observations of solar system objects. The use of the zodiac as a means to determine astronomical measurement remained the main method for defining celestial positions by Western astronomers until the Renaissance, at which time preference moved to the equatorial coordinate system, which measures astronomical positions by right ascension and declination rather than the ecliptic-based definitions of celestial longitude and celestial latitude.
The orientation of equatorial coordinates is aligned with the Earth's axis of rotation, rather than the plane of the planet's orbit around the Sun. The word "zodiac" is also used in reference to the zodiacal cloud of dust grains that move among the planets, and the zodiacal light that originates from their scattering of sunlight. While its name is derived from the zodiac, the zodiacal light covers the entire night sky, with enhancements in certain directions. Unicode characters In Unicode, the symbols of the zodiac signs are encoded in the block "Miscellaneous Symbols". They can be forced to look like text by appending U+FE0E, or like emojis by appending U+FE0F:
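A short sketch showing how the code points and variation selectors combine (the zodiac symbols run contiguously from U+2648 ARIES to U+2653 PISCES):

```python
# The twelve zodiac sign symbols occupy the contiguous range U+2648..U+2653
# in the "Miscellaneous Symbols" block.
SIGN_NAMES = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo", "Libra",
              "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

for i, name in enumerate(SIGN_NAMES):
    symbol = chr(0x2648 + i)
    text_style = symbol + "\uFE0E"   # U+FE0E requests text presentation
    emoji_style = symbol + "\uFE0F"  # U+FE0F requests emoji presentation
    print(f"U+{0x2648 + i:04X} {name}: {text_style} {emoji_style}")
```

Renderers that honor the variation selectors show the first form as a monochrome text glyph and the second as a colored emoji; renderers that do not may simply ignore the selectors.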
Physical sciences
Celestial sphere
null
34413
https://en.wikipedia.org/wiki/Zoology
Zoology
Zoology is the scientific study of animals. Its studies include the structure, embryology, classification, habits, and distribution of all animals, both living and extinct, and how they interact with their ecosystems. Zoology is one of the primary branches of biology. The term is derived from Ancient Greek ζῷον, zōion ('animal'), and λόγος, logos ('knowledge', 'study'). Although humans have always been interested in the natural history of the animals they saw around them, and used this knowledge to domesticate certain species, the formal study of zoology can be said to have originated with Aristotle. He viewed animals as living organisms, studied their structure and development, and considered their adaptations to their surroundings and the function of their parts. Modern zoology has its origins during the Renaissance and early modern period, with Carl Linnaeus, Antonie van Leeuwenhoek, Robert Hooke, Charles Darwin, Gregor Mendel and many others. The study of animals has largely moved on to deal with form and function, adaptations, relationships between groups, behaviour and ecology. Zoology has increasingly been subdivided into disciplines such as classification, physiology, biochemistry and evolution. With the discovery of the structure of DNA by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics. History The history of zoology traces the study of the animal kingdom from ancient to modern times. Prehistoric people needed to study the animals and plants in their environment to exploit them and survive. Cave paintings, engravings and sculptures in France dating back 15,000 years show bison, horses, and deer in carefully rendered detail. Similar images from other parts of the world mostly illustrated the animals hunted for food, as well as dangerous wild animals. The Neolithic Revolution, which is characterized by the domestication of animals, continued throughout Antiquity. Ancient knowledge of wildlife is illustrated by the realistic depictions of wild and domestic animals in the Near East, Mesopotamia, and Egypt, including husbandry practices and techniques, hunting and fishing. The invention of writing is reflected in zoology by the presence of animals in Egyptian hieroglyphics. Although the concept of zoology as a single coherent field arose much later, the zoological sciences emerged from natural history, reaching back to the biological works of Aristotle and Galen in the ancient Greco-Roman world. In the fourth century BC, Aristotle looked at animals as living organisms, studying their structure, development and vital phenomena. He divided them into two groups: animals with blood, equivalent to our concept of vertebrates, and animals without blood, invertebrates. He spent two years on Lesbos, observing and describing the animals and plants, considering the adaptations of different organisms and the function of their parts. Four hundred years later, the Roman physician Galen dissected animals to study their anatomy and the function of the different parts, because the dissection of human cadavers was prohibited at the time. This resulted in some of his conclusions being false, but for many centuries it was considered heretical to challenge any of his views, so the study of anatomy stagnated.
During the post-classical era, Middle Eastern science and medicine was the most advanced in the world, integrating concepts from Ancient Greece, Rome, Mesopotamia and Persia, as well as the ancient Indian tradition of Ayurveda, while making numerous advances and innovations. In the 13th century, Albertus Magnus produced commentaries and paraphrases of all Aristotle's works; his books on topics like botany, zoology, and minerals included information from ancient sources, but also the results of his own investigations. His general approach was surprisingly modern, and he wrote, "For it is [the task] of natural science not simply to accept what we are told but to inquire into the causes of natural things." An early pioneer was Conrad Gessner, whose monumental 4,500-page encyclopedia of animals, Historia animalium, was published in four volumes between 1551 and 1558. In Europe, Galen's work on anatomy remained largely unsurpassed and unchallenged up until the 16th century. During the Renaissance and early modern period, zoological thought was revolutionized in Europe by a renewed interest in empiricism and the discovery of many novel organisms. Prominent in this movement were Andreas Vesalius and William Harvey, who used experimentation and careful observation in physiology, and naturalists such as Carl Linnaeus, Jean-Baptiste Lamarck, and Buffon, who began to classify the diversity of life and the fossil record, as well as studying the development and behavior of organisms. Antonie van Leeuwenhoek did pioneering work in microscopy and revealed the previously unknown world of microorganisms, laying the groundwork for cell theory. van Leeuwenhoek's observations were endorsed by Robert Hooke; according to the emerging cell theory, all living organisms were composed of one or more cells and could not generate spontaneously. Cell theory provided a new perspective on the fundamental basis of life. Having previously been the realm of gentlemen naturalists, zoology became an increasingly professional scientific discipline over the 18th, 19th and 20th centuries. Explorer-naturalists such as Alexander von Humboldt investigated the interaction between organisms and their environment, and the ways this relationship depends on geography, laying the foundations for biogeography, ecology and ethology. Naturalists began to reject essentialism and consider the importance of extinction and the mutability of species. These developments, as well as the results from embryology and paleontology, were synthesized in the 1859 publication of Charles Darwin's theory of evolution by natural selection; in this Darwin placed the theory of organic evolution on a new footing, by explaining the processes by which it can occur, and providing observational evidence that it had done so. Darwin's theory was rapidly accepted by the scientific community and soon became a central axiom of the rapidly developing science of biology. The basis for modern genetics began with the work of Gregor Mendel on peas in 1865, although the significance of his work was not realized at the time. Darwin gave a new direction to morphology and physiology, by uniting them in a common biological theory: the theory of organic evolution. The result was a reconstruction of the classification of animals upon a genealogical basis, fresh investigation of the development of animals, and early attempts to determine their genetic relationships. The end of the 19th century saw the fall of spontaneous generation and the rise of the germ theory of disease, though the mechanism of inheritance remained a mystery.
In the early 20th century, the rediscovery of Mendel's work led to the rapid development of genetics, and by the 1930s the combination of population genetics and natural selection in the modern synthesis created evolutionary biology. Research in cell biology is interconnected with other fields such as genetics, biochemistry, medical microbiology, immunology, and cytochemistry. With the determination of the double helical structure of the DNA molecule by Francis Crick and James Watson in 1953, the realm of molecular biology opened up, leading to advances in cell biology, developmental biology and molecular genetics. The study of systematics was transformed as DNA sequencing elucidated the degrees of affinity between different organisms. Scope Zoology is the branch of science dealing with animals. A species can be defined as the largest group of organisms in which any two individuals of the appropriate sexes can produce fertile offspring; about 1.5 million species of animal have been described, and it has been estimated that as many as 8 million animal species may exist. An early necessity was to identify the organisms and group them according to their characteristics, differences and relationships; this is the field of the taxonomist. Originally it was thought that species were immutable, but with the arrival of Darwin's theory of evolution, the field of cladistics came into being, studying the relationships between the different groups or clades. Systematics is the study of the diversification of living forms; the evolutionary history of a group is known as its phylogeny, and the relationships between the clades can be shown diagrammatically in a cladogram. Although someone who made a scientific study of animals would historically have described themselves as a zoologist, the term has come to refer to those who deal with individual animals, with others describing themselves more specifically as physiologists, ethologists, evolutionary biologists, ecologists, pharmacologists, endocrinologists or parasitologists. Branches of zoology Although the study of animal life is ancient, its scientific incarnation is relatively modern. This mirrors the transition from natural history to biology at the start of the 19th century. Since Hunter and Cuvier, comparative anatomical study has been associated with morphography, shaping the modern areas of zoological investigation: anatomy, physiology, histology, embryology, teratology and ethology. Modern zoology first arose in German and British universities. In Britain, Thomas Henry Huxley was a prominent figure. His ideas were centered on the morphology of animals. Many consider him the greatest comparative anatomist of the latter half of the 19th century. Similar to Hunter, his courses were composed of lectures and laboratory practical classes, in contrast to the previous format of lectures only. Classification Scientific classification in zoology is a method by which zoologists group and categorize organisms by biological type, such as genus or species. Biological classification is a form of scientific taxonomy. Modern biological classification has its root in the work of Carl Linnaeus, who grouped species according to shared physical characteristics. These groupings have since been revised to improve consistency with the Darwinian principle of common descent. Molecular phylogenetics, which uses nucleic acid sequences as data, has driven many recent revisions and is likely to continue to do so.
Biological classification belongs to the science of zoological systematics. Many scientists now consider the five-kingdom system outdated. Modern alternative classification systems generally start with the three-domain system: Archaea (originally Archaebacteria); Bacteria (originally Eubacteria); and Eukaryota (including protists, fungi, plants, and animals). These domains reflect whether the cells have nuclei or not, as well as differences in the chemical composition of the cell exteriors. Further, each kingdom is broken down recursively until each species is separately classified. The order is: domain; kingdom; phylum; class; order; family; genus; species. The scientific name of an organism is generated from its genus and species. For example, humans are listed as Homo sapiens. Homo is the genus and sapiens the specific epithet; combined, they make up the species name. When writing the scientific name of an organism, it is proper to capitalize the first letter in the genus and put all of the specific epithet in lowercase. Additionally, the entire term may be italicized or underlined. The dominant classification system is called the Linnaean taxonomy. It includes ranks and binomial nomenclature. The classification, taxonomy, and nomenclature of zoological organisms is administered by the International Code of Zoological Nomenclature. A merging draft, BioCode, was published in 1997 in an attempt to standardize nomenclature, but has yet to be formally adopted. Vertebrate and invertebrate zoology Vertebrate zoology is the biological discipline that consists of the study of vertebrate animals, that is, animals with a backbone, such as fish, amphibians, reptiles, birds and mammals. The various taxonomically oriented disciplines, i.e. mammalogy, biological anthropology, herpetology, ornithology, and ichthyology, seek to identify and classify species and study the structures and mechanisms specific to those groups. The rest of the animal kingdom is dealt with by invertebrate zoology, a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, worms, molluscs, arthropods and many other phyla, but single-celled organisms or protists are not usually included. Structural zoology Cell biology studies the structural and physiological properties of cells, including their behavior, interactions, and environment. This is done on both the microscopic and molecular levels, for single-celled organisms such as bacteria as well as the specialized cells in multicellular organisms such as humans. Understanding the structure and function of cells is fundamental to all of the biological sciences. The similarities and differences between cell types are particularly relevant to molecular biology. Anatomy considers the forms of macroscopic structures such as organs and organ systems. It focuses on how organs and organ systems work together in the bodies of humans and other animals, in addition to how they work independently. Anatomy and cell biology are two studies that are closely related, and can be categorized under "structural" studies. Comparative anatomy is the study of similarities and differences in the anatomy of different groups. It is closely related to evolutionary biology and phylogeny (the evolution of species). Physiology Physiology studies the mechanical, physical, and biochemical processes of living organisms by attempting to understand how all of the structures function as a whole. The theme of "structure to function" is central to biology.
Physiological studies have traditionally been divided into plant physiology and animal physiology, but some principles of physiology are universal, no matter what particular organism is being studied. For example, what is learned about the physiology of yeast cells can also apply to human cells. The field of animal physiology extends the tools and methods of human physiology to non-human species. Physiology studies how, for example, the nervous, immune, endocrine, respiratory, and circulatory systems function and interact. Developmental biology Developmental biology is the study of the processes by which animals and plants reproduce and grow. The discipline includes the study of embryonic development, cellular differentiation, regeneration, asexual and sexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism. Development of both animals and plants is further considered in the articles on evolution, population genetics, heredity, genetic variability, Mendelian inheritance, and reproduction. Evolutionary biology Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. Evolutionary research is concerned with the origin and descent of species, as well as their change over time, and includes scientists from many taxonomically oriented disciplines. For example, it generally involves scientists who have special training in particular groups of organisms, through disciplines such as mammalogy, ornithology, herpetology, or entomology, but who use those organisms as systems to answer general questions about evolution. Evolutionary biology is partly based on paleontology, which uses the fossil record to answer questions about the mode and tempo of evolution, and partly on the developments in areas such as population genetics and evolutionary theory. Following the development of DNA fingerprinting techniques in the late 20th century, the application of these techniques in zoology has increased the understanding of animal populations. In the 1980s, developmental biology re-entered evolutionary biology, after its initial exclusion from the modern synthesis, through the study of evolutionary developmental biology. Related fields often considered part of evolutionary biology are phylogenetics, systematics, and taxonomy. Ethology Ethology is the scientific and objective study of animal behavior under natural conditions, as opposed to behaviorism, which focuses on behavioral response studies in a laboratory setting. Ethologists have been particularly concerned with the evolution of behavior and the understanding of behavior in terms of the theory of natural selection. In one sense, the first modern ethologist was Charles Darwin, whose book The Expression of the Emotions in Man and Animals influenced many future ethologists. A subfield of ethology is behavioral ecology, which attempts to answer Nikolaas Tinbergen's four questions with regard to animal behavior: what are the proximate causes of the behavior, the developmental history of the organism, the survival value of the behavior, and its phylogeny? Another area of study is animal cognition, which uses laboratory experiments and carefully controlled field studies to investigate an animal's intelligence and learning. Biogeography Biogeography studies the spatial distribution of organisms on the Earth, focusing on topics like dispersal and migration, plate tectonics, climate change, and cladistics.
It is an integrative field of study, uniting concepts and information from evolutionary biology, taxonomy, ecology, physical geography, geology, paleontology and climatology. The origin of this field of study is widely credited to Alfred Russel Wallace, a British biologist who had some of his work jointly published with Charles Darwin. Molecular biology Molecular biology studies the common genetic and developmental mechanisms of animals and plants, attempting to answer questions regarding the mechanisms of genetic inheritance and the structure of the gene. In 1953, James Watson and Francis Crick described the structure of DNA and the interactions within the molecule, and this publication jump-started research into molecular biology and increased interest in the subject. While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, either directly, through the study of the interactions of molecules in their own right, as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields of evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics. Reproduction Animals generally reproduce by sexual reproduction, a process involving the union of a male and a female haploid gamete, each gamete formed by meiosis. Ordinarily, gametes produced by separate individuals unite by a process of fertilization to form a diploid zygote that can then develop into a genetically unique individual progeny. However, some animals are also capable of reproducing parthenogenetically as an alternative reproductive process. Parthenogenesis has been described in snakes and lizards (see parthenogenesis in squamates), in amphibians (see parthenogenesis in amphibians) and in numerous other species (see parthenogenesis). Generally, meiosis in parthenogenetically reproducing animals occurs by a similar process to that in sexually reproducing animals, but the diploid zygote nucleus is generated by the union of two haploid genomes from the same individual rather than from different individuals.
Biology and health sciences
Biology
null
34417
https://en.wikipedia.org/wiki/Zero-sum%20game
Zero-sum game
Zero-sum game is a mathematical representation in game theory and economic theory of a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other. In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero. If the total gains of the participants are added up and the total losses are subtracted, they will sum to zero. Thus, cutting a cake, where taking a more significant piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game if all participants value each unit of cake equally. Other examples of zero-sum games in daily life include games like poker, chess, sports and bridge, where one person gains and another person loses, resulting in a zero net benefit for every player. In markets and financial instruments, futures contracts and options are zero-sum games as well. In contrast, non-zero-sum describes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called a strictly competitive game, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with the minimax theorem, which is closely related to linear programming duality, or with Nash equilibrium. The Prisoner's Dilemma is a classic non-zero-sum game. Definition The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation is Pareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game. Zero-sum games are a specific example of constant-sum games, where the sum of each outcome is always zero. Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation. In situations where one decision maker's gain (or loss) does not necessarily result in the other decision makers' loss (or gain), the situation is referred to as non-zero-sum. Thus, a country with an excess of bananas trading with another country for their excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with. The idea of Pareto optimal payoff in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, where both players always seek to minimize the opponent's payoff at a favourable cost to themselves rather than prefer more over less. The punishing-the-opponent standard can be used in both zero-sum games (e.g. warfare games, chess) and non-zero-sum games (e.g. pooling selection games). Each player in the game has a simple desire to maximise their own profit, which the opponent wishes to minimise. Solution For two-player finite zero-sum games, if the players are allowed to play a mixed strategy, the game always has at least one equilibrium solution. The different game-theoretic solution concepts of Nash equilibrium, minimax, and maximin all give the same solution. Notice that this is not true for pure strategies. Example A game's payoff matrix is a convenient representation. Consider as an example the two-player zero-sum game described below.
The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then the choices are revealed and each player's points total is affected according to the payoff for those choices. Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points. In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points. Émile Borel and John von Neumann had the fundamental insight that probability provides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximum expected point-loss independent of the opponent's strategy. This leads to a linear programming problem whose solution gives the optimal strategies for each player. This minimax method can compute provably optimal strategies for all two-player zero-sum games. For the example given above, it turns out that Red should choose action 1 with probability 4/7 and action 2 with probability 3/7, while Blue should assign the probabilities 0, 4/7, and 3/7 to the three actions A, B, and C. Red will then win 20/7 points on average per game. Solving The Nash equilibrium for a two-player, zero-sum game can be found by solving a linear programming problem. Suppose a zero-sum game has a payoff matrix M where element M(i, j) is the payoff obtained when the minimizing player chooses pure strategy i and the maximizing player chooses pure strategy j (i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element of M is positive. The game will have at least one Nash equilibrium, which can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vector u: minimize Σ u_i subject to u ≥ 0 (each element of u must be nonnegative) and M u ≥ 1 (each element of the vector M u must be at least 1). For the resulting vector u, the inverse of the sum of its elements is the value of the game. Multiplying u by that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy. If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant and will not affect the equilibrium mixed strategies. The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program.
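The strategies and value just stated can be checked numerically. The payoff matrix below is the one implied by the example's reasoning (rows are Red's actions 1 and 2, columns are Blue's actions A, B and C, and entries are Red's gains); since the original figure is not reproduced here, treat the matrix as an assumption.

```python
# A quick check, in plain Python, of the mixed strategies stated above.
payoff = [[30, -10, 20],   # Red action 1 vs Blue A, B, C (entries are Red's gains)
          [10,  20, -20]]  # Red action 2 vs Blue A, B, C
red = [4/7, 3/7]           # Red's probabilities for actions 1 and 2
blue = [0, 4/7, 3/7]       # Blue's probabilities for actions A, B, C

# Red's expected gain against each of Blue's pure actions:
for j, action in enumerate("ABC"):
    expected = sum(red[i] * payoff[i][j] for i in range(2))
    print(action, round(expected, 4))  # A: 21.4286; B and C: 2.8571 (= 20/7)

# Expected value when both play their mixed strategies:
value = sum(red[i] * blue[j] * payoff[i][j] for i in range(2) for j in range(3))
print(round(value, 4))                 # 2.8571 = 20/7, Red's average gain per game
```

The linear program just described can likewise be sketched, assuming NumPy and SciPy are available; this is an illustrative sketch, not a canonical implementation. The matrix is arranged with the minimizing player on the rows and the maximizing player on the columns, as in the text.

```python
# Sketch of the procedure above: make all elements of M positive, solve
#   minimize sum(u)  subject to  M u >= 1, u >= 0,
# then the game value is 1/sum(u), and u times the value is the maximizing
# player's mixed strategy.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(M):
    M = np.asarray(M, dtype=float)
    shift = 0.0
    if M.min() <= 0:                     # make every element positive
        shift = 1.0 - M.min()
        M = M + shift
    res = linprog(c=np.ones(M.shape[1]),               # minimize sum(u)
                  A_ub=-M, b_ub=-np.ones(M.shape[0]))  # M u >= 1 (u >= 0 by default)
    value = 1.0 / res.x.sum()
    strategy = res.x * value             # maximizing player's probabilities
    return value - shift, strategy       # undo the shift to get the true value

# Rows: Blue (minimizer) choosing A, B, C; columns: Red (maximizer) choosing 1, 2.
value, red_strategy = solve_zero_sum([[30, 10],
                                      [-10, 20],
                                      [20, -20]])
print(value, red_strategy)               # ~2.857 (= 20/7) and ~[0.571, 0.429]
```

As noted in the text, the shift by a constant changes the value of the game but not the equilibrium strategies.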
Alternatively, the equilibrium mixed strategy for the minimizing player can be found by using the above procedure on a modified payoff matrix which is the transpose and negation of M (adding a constant so all elements are positive), then solving the resulting game. If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations; thus such games are equivalent to linear programs, in general. Universal solution If avoiding a zero-sum game is an action choice available with some probability to the players, avoiding is always an equilibrium strategy for at least one player in a zero-sum game. For any two-player zero-sum game where a zero-zero draw is impossible or non-credible after play has started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if there is a credible zero-zero draw after a zero-sum game has started, it is not better than the avoiding strategy. In this sense, reward-as-you-go in optimal choice computation prevails over all two-player zero-sum games with respect to starting the game or not. The most common or simple example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations all parties pursuing personal interest results in mutually destructive behaviour. Copeland's review notes that an n-player non-zero-sum game can be converted into an (n+1)-player zero-sum game, where the (n+1)th player, denoted the fictitious player, receives the negative of the sum of the gains of the other n players (the global gain or loss). Zero-sum three-person games There are manifold relationships between players in a zero-sum three-person game. In a zero-sum two-person game, anything one player wins is necessarily lost by the other and vice versa; therefore, there is always an absolute antagonism of interests, and something similar holds in the three-person game. A particular move of a player in a zero-sum three-person game may be clearly beneficial to him and detrimental to both other players, or beneficial to one and detrimental to the other opponent. In particular, parallelism of interests between two players makes cooperation desirable; it may happen that a player has a choice among various policies: he can establish a parallelism of interests with another player by adjusting his conduct, or do the opposite; he can choose which of the other two players he prefers to build such a parallelism with, and to what extent. As a typical example of a zero-sum three-person game: if Player 1 chooses to defend but Players 2 and 3 choose to attack, each of the latter gains one point, while Player 1 loses two points because those points are taken away by the other players; it is evident that Players 2 and 3 have a parallelism of interests. Real life example Economic benefits of low-cost airlines in saturated markets - net benefits or a zero-sum game Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million. Therefore, the replacement effect should be considered when introducing a new model, as it leads to both economic leakage and injection; introducing new models thus requires caution.
For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game, because for Hong Kong, the consumption of overseas tourists in Hong Kong is income, while the consumption of Hong Kong residents in the destination cities is outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines. Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all aspects, taking into account the economic inflow, outflow and displacement effects caused by the model. Zero-sum games in financial markets Derivatives trading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero. An options contract - whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date - is an example of a zero-sum game. A futures contract - whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date - is also an example of a zero-sum game. This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other. If the price of the underlying asset increases before the expiration date, the buyer may exercise or close the options or futures contract. The buyer's gain, and the corresponding seller's loss, will be the difference between the strike price and the value of the underlying asset at that time. Hence, the net transfer of wealth is zero. Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game. Consider a standard interest rate swap whereby Firm A pays a fixed rate and receives a floating rate; correspondingly, Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate - fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate - floating rate); a toy computation at the end of this article illustrates the exact cancellation. Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. The financial markets are complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions. The stock market is an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings. The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance.
For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players. Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market. Complexity Robert Wright theorized, in his book Nonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. Extensions In 1944, John von Neumann and Oskar Morgenstern proved that any non-zero-sum game for n players is equivalent to a zero-sum game with n + 1 players, the (n + 1)th player representing the global profit or loss. Misunderstandings Zero-sum games, and particularly their solutions, are commonly misunderstood by critics of game theory, usually with respect to the independence and rationality of the players, as well as to the interpretation of utility functions. Furthermore, the word "game" does not imply the model is valid only for recreational games. Politics is sometimes called zero-sum because in common usage the idea of a stalemate is perceived to be "zero-sum"; politics and macroeconomics are not zero-sum games, however, because they do not constitute conserved systems. Zero-sum thinking In psychology, zero-sum thinking refers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss.
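Finally, returning to the interest rate swap example from the financial-markets discussion above: a toy computation makes the exact cancellation visible. All figures here (notional, rates) are hypothetical, chosen only for illustration.

```python
# Toy illustration of the swap example: Firm A pays fixed and receives floating,
# Firm B does the opposite, so their gains sum to zero wherever rates go.
def swap_gains(notional, fixed_rate, floating_rate):
    """Return (Firm A's gain, Firm B's gain) for one settlement period."""
    firm_a = notional * (floating_rate - fixed_rate)  # receives floating, pays fixed
    firm_b = notional * (fixed_rate - floating_rate)  # receives fixed, pays floating
    return firm_a, firm_b

# Rates rise above the 3% fixed leg: A gains, B loses by the same amount.
a, b = swap_gains(notional=1_000_000, fixed_rate=0.03, floating_rate=0.045)
print(a, b, a + b)   # 15000.0 -15000.0 0.0
```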
Mathematics
Game theory
null
34420
https://en.wikipedia.org/wiki/Zinc
Zinc
Zinc is a chemical element with the symbol Zn and atomic number 30. It is a slightly brittle metal at room temperature and has a shiny-greyish appearance when oxidation is removed. It is the first element in group 12 (IIB) of the periodic table. In some respects, zinc is chemically similar to magnesium: both elements exhibit only one normal oxidation state (+2), and the Zn2+ and Mg2+ ions are of similar size. Zinc is the 24th most abundant element in Earth's crust and has five stable isotopes. The most common zinc ore is sphalerite (zinc blende), a zinc sulfide mineral. The largest workable lodes are in Australia, Asia, and the United States. Zinc is refined by froth flotation of the ore, roasting, and final extraction using electricity (electrowinning). Zinc is an essential trace element for humans, animals, plants and for microorganisms and is necessary for prenatal and postnatal development. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes. Zinc is also an essential nutrient element for coral growth as it is an important cofactor for many enzymes. Zinc deficiency affects about two billion people in the developing world and is associated with many diseases. In children, deficiency causes growth retardation, delayed sexual maturation, infection susceptibility, and diarrhea. Enzymes with a zinc atom in the reactive center are widespread in biochemistry, such as alcohol dehydrogenase in humans. Consumption of excess zinc may cause ataxia, lethargy, and copper deficiency. In marine biomes, notably within polar regions, a deficit of zinc can compromise the vitality of primary algal communities, potentially destabilizing the intricate marine trophic structures and consequently impacting biodiversity. Brass, an alloy of copper and zinc in various proportions, was used as early as the third millennium BC in the Aegean area and the region which currently includes Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia. In the second millennium BC it was used in the regions currently including West India, Uzbekistan, Iran, Syria, Iraq, and Israel. Zinc metal was not produced on a large scale until the 12th century in India, though it was known to the ancient Romans and Greeks. The mines of Rajasthan have given definite evidence of zinc production going back to the 6th century BC. The oldest evidence of pure zinc comes from Zawar, in Rajasthan, as early as the 9th century AD when a distillation process was employed to make pure zinc. Alchemists burned zinc in air to form what they called "philosopher's wool" or "white snow". The element was probably named by the alchemist Paracelsus after the German word Zinke (prong, tooth). German chemist Andreas Sigismund Marggraf is credited with discovering pure metallic zinc in 1746. Work by Luigi Galvani and Alessandro Volta uncovered the electrochemical properties of zinc by 1800. Corrosion-resistant zinc plating of iron (hot-dip galvanizing) is the major application for zinc. Other applications are in electrical batteries, small non-structural castings, and alloys such as brass. A variety of zinc compounds are commonly used, such as zinc carbonate and zinc gluconate (as dietary supplements), zinc chloride (in deodorants), zinc pyrithione (anti-dandruff shampoos), zinc sulfide (in luminescent paints), and dimethylzinc or diethylzinc in the organic laboratory. 
Characteristics Physical properties Zinc is a bluish-white, lustrous, diamagnetic metal, though most common commercial grades of the metal have a dull finish. It is somewhat less dense than iron and has a hexagonal crystal structure, with a distorted form of hexagonal close packing, in which each atom has six nearest neighbors (at 265.9 pm) in its own plane and six others at a greater distance of 290.6 pm. The metal is hard and brittle at most temperatures but becomes malleable between 100 and 150 °C. Above 210 °C, the metal becomes brittle again and can be pulverized by beating. Zinc is a fair conductor of electricity. For a metal, zinc has a relatively low melting point (419.5 °C) and boiling point (907 °C). The melting point is the lowest of all the d-block metals aside from mercury and cadmium; for this reason among others, zinc, cadmium, and mercury are often not considered to be transition metals like the rest of the d-block metals. Many alloys contain zinc, including brass. Other metals long known to form binary alloys with zinc are aluminium, antimony, bismuth, gold, iron, lead, mercury, silver, tin, magnesium, cobalt, nickel, tellurium, and sodium. Although neither zinc nor zirconium is ferromagnetic, their alloy ZrZn2 exhibits ferromagnetism below 35 K. Occurrence Zinc makes up about 75 ppm (0.0075%) of Earth's crust, making it the 24th most abundant element. It also makes up 312 ppm of the solar system, where it is the 22nd most abundant element. Typical background concentrations of zinc do not exceed 1 μg/m3 in the atmosphere; 300 mg/kg in soil; 100 mg/kg in vegetation; 20 μg/L in freshwater and 5 μg/L in seawater. The element is normally found in association with other base metals such as copper and lead in ores. Zinc is a chalcophile, meaning the element is more likely to be found in minerals together with sulfur and other heavy chalcogens, rather than with the light chalcogen oxygen or with non-chalcogen electronegative elements such as the halogens. Sulfides formed as the crust solidified under the reducing conditions of the early Earth's atmosphere. Sphalerite, which is a form of zinc sulfide, is the most heavily mined zinc-containing ore because its concentrate contains 60–62% zinc. Other source minerals for zinc include smithsonite (zinc carbonate), hemimorphite (zinc silicate), wurtzite (another zinc sulfide), and sometimes hydrozincite (basic zinc carbonate). With the exception of wurtzite, all these other minerals were formed by weathering of the primordial zinc sulfides. Identified world zinc resources total about 1.9–2.8 billion tonnes. Large deposits are in Australia, Canada and the United States, with the largest reserves in Iran. The most recent estimate of the reserve base for zinc (which meets specified minimum physical criteria related to current mining and production practices) was made in 2009 and calculated to be roughly 480 Mt. Zinc reserves, on the other hand, are geologically identified ore bodies whose suitability for recovery is economically based (location, grade, quality, and quantity) at the time of determination. Since exploration and mine development is an ongoing process, the amount of zinc reserves is not a fixed number, and the sustainability of zinc ore supplies cannot be judged by simply extrapolating the combined mine life of today's zinc mines.
This concept is well supported by data from the United States Geological Survey (USGS), which illustrates that although refined zinc production increased 80% between 1990 and 2010, the reserve lifetime for zinc has remained unchanged. About 346 million tonnes have been extracted throughout history to 2002, and scholars have estimated that about 109–305 million tonnes are in use. Isotopes Five stable isotopes of zinc occur in nature, with 64Zn being the most abundant isotope (49.17% natural abundance). The other isotopes found in nature are 66Zn (27.73%), 67Zn (4.04%), 68Zn (18.45%), and 70Zn (0.61%). Several dozen radioisotopes have been characterized. 65Zn, which has a half-life of 243.66 days, is the longest-lived (and hence least active) radioisotope, followed by 72Zn with a half-life of 46.5 hours (a short worked example of half-life arithmetic appears at the end of this passage). Zinc has 10 nuclear isomers, of which 69mZn has the longest half-life, 13.76 h. The superscript m indicates a metastable isotope. The nucleus of a metastable isotope is in an excited state and will return to the ground state by emitting a photon in the form of a gamma ray. Of these, one isotope has three excited metastable states, another has two, and several others each have only one. The most common decay mode of a radioisotope of zinc with a mass number lower than 66 is electron capture, which produces an isotope of copper; for example: 65Zn + e− → 65Cu + νe. The most common decay mode of a radioisotope of zinc with a mass number higher than 66 is beta decay (β−), which produces an isotope of gallium; for example: 69Zn → 69Ga + e− + ν̄e. Compounds and chemistry Reactivity Zinc has an electron configuration of [Ar]3d104s2 and is a member of group 12 of the periodic table. It is a moderately reactive metal and strong reducing agent; in the reactivity series it is comparable to manganese. The surface of the pure metal tarnishes quickly, eventually forming a protective passivating layer of the basic zinc carbonate, Zn5(OH)6(CO3)2, by reaction with atmospheric carbon dioxide. Zinc burns in air with a bright bluish-green flame, giving off fumes of zinc oxide. Zinc reacts readily with acids, alkalis and other non-metals. Extremely pure zinc reacts only slowly at room temperature with acids. Strong acids, such as hydrochloric or sulfuric acid, can remove the passivating layer, and the subsequent reaction with the acid releases hydrogen gas. Zinc chemistry resembles that of the late first-row transition metals, nickel and copper, as well as certain main group elements. Almost all zinc compounds have the element in the +2 oxidation state. When Zn2+ compounds form, the outer shell s electrons are lost, yielding a bare zinc ion with the electronic configuration [Ar]3d10. The filled interior d shell generally does not participate in bonding, producing diamagnetic and mostly colorless compounds. In aqueous solution an octahedral complex, [Zn(H2O)6]2+, is the predominant species. The ionic radii of zinc and magnesium happen to be nearly identical. Consequently some of the equivalent salts have the same crystal structure, and in other circumstances where ionic radius is a determining factor, the chemistry of zinc has much in common with that of magnesium. Compared to the transition metals, zinc tends to form bonds with a greater degree of covalency. Complexes with N- and S-donors are much more stable. Complexes of zinc are mostly 4- or 6-coordinate, although 5-coordinate complexes are known. Other oxidation states require unusual physical conditions, and the only positive oxidation states demonstrated are +1 and +2.
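Referring back to the half-lives quoted in the isotopes discussion above, the standard decay relation N/N0 = (1/2)^(t/T) gives the fraction of a sample remaining after time t. The sketch below applies it to 65Zn's 243.66-day half-life, which is taken from the text; the chosen times are arbitrary.

```python
# Worked example of half-life arithmetic for 65Zn (half-life from the text).
def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of the original sample left after t_days of decay."""
    return 0.5 ** (t_days / half_life_days)

HALF_LIFE_ZN65 = 243.66  # days
for t in (243.66, 365, 1000):
    print(f"after {t:7.2f} days: {fraction_remaining(t, HALF_LIFE_ZN65):.3f}")
# after  243.66 days: 0.500  (exactly one half-life)
# after  365.00 days: 0.354
# after 1000.00 days: 0.058
```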
The volatilization of zinc in combination with zinc chloride at temperatures above 285 °C indicates the formation of Zn2Cl2, a zinc compound with a +1 oxidation state. Calculations indicate that a zinc compound with the oxidation state of +4 is unlikely to exist. Zn(III) is predicted to exist in the presence of strongly electronegative trianions; however, there exists some doubt around this possibility. Zinc(I) compounds Zinc(I) compounds are very rare. The [Zn2]2+ ion is implicated by the formation of a yellow diamagnetic glass on dissolving metallic zinc in molten ZnCl2. The [Zn2]2+ core would be analogous to the [Hg2]2+ cation present in mercury(I) compounds. The diamagnetic nature of the ion confirms its dimeric structure. The first zinc(I) compound containing the Zn–Zn bond was decamethyldizincocene, (η5-C5Me5)2Zn2. Zinc(II) compounds Binary compounds of zinc are known for most of the metalloids and all the nonmetals except the noble gases. The oxide ZnO is a white powder that is nearly insoluble in neutral aqueous solutions, but is amphoteric, dissolving in both strong basic and acidic solutions. The other chalcogenides (ZnS, ZnSe, and ZnTe) have varied applications in electronics and optics. Pnictogenides (Zn3N2, Zn3P2, Zn3As2 and Zn3Sb2), the peroxide (ZnO2), the hydride (ZnH2), and the carbide (ZnC2) are also known. Of the four halides, ZnF2 has the most ionic character, while the others (ZnCl2, ZnBr2, and ZnI2) have relatively low melting points and are considered to have more covalent character. In weak basic solutions containing Zn2+ ions, the hydroxide Zn(OH)2 forms as a white precipitate. In stronger alkaline solutions, this hydroxide is dissolved to form zincates ([Zn(OH)4]2−). The nitrate Zn(NO3)2, chlorate Zn(ClO3)2, sulfate ZnSO4, phosphate Zn3(PO4)2, molybdate ZnMoO4, cyanide Zn(CN)2, arsenite, arsenate and the chromate ZnCrO4 (one of the few colored zinc compounds) are a few examples of other common inorganic compounds of zinc. Organozinc compounds are those that contain zinc–carbon covalent bonds. Diethylzinc (Zn(C2H5)2) is a reagent in synthetic chemistry. It was first reported in 1848 from the reaction of zinc and ethyl iodide, and was the first compound known to contain a metal–carbon sigma bond. Test for zinc Cobalticyanide paper (Rinnmann's test for Zn) can be used as a chemical indicator for zinc. 4 g of K3Co(CN)6 and 1 g of KClO3 are dissolved in 100 ml of water. The paper is dipped in the solution and dried at 100 °C. One drop of the sample is dropped onto the dry paper and heated. A green disc indicates the presence of zinc. History Ancient use Various isolated examples of the use of impure zinc in ancient times have been discovered. Zinc ores were used to make the zinc–copper alloy brass thousands of years prior to the discovery of zinc as a separate element. Judean brass from the 14th to 10th centuries BC contains 23% zinc. Knowledge of how to produce brass spread to Ancient Greece by the 7th century BC, but few varieties were made. Ornaments made of alloys containing 80–90% zinc, with lead, iron, antimony, and other metals making up the remainder, have been found that are 2,500 years old. A possibly prehistoric statuette containing 87.5% zinc was found in a Dacian archaeological site. Strabo, writing in the 1st century BC (but quoting a now lost work of the 4th century BC historian Theopompus), mentions "drops of false silver" which when mixed with copper make brass. This may refer to small quantities of zinc that are a by-product of smelting sulfide ores. Zinc in such remnants in smelting ovens was usually discarded as it was thought to be worthless. The manufacture of brass was known to the Romans by about 30 BC.
They made brass by heating powdered calamine (zinc silicate or carbonate), charcoal and copper together in a crucible. The resulting calamine brass was then either cast or hammered into shape for use in weaponry. Some coins struck by Romans in the Christian era are made of what is probably calamine brass. The oldest known pills were made of the zinc carbonates hydrozincite and smithsonite. The pills were used for sore eyes and were found aboard the Roman ship Relitto del Pozzino, wrecked in 140 BC. The Berne zinc tablet is a votive plaque dating to Roman Gaul made of an alloy that is mostly zinc. The Charaka Samhita, thought to have been written between 300 and 500 AD, mentions a metal which, when oxidized, produces pushpanjan, thought to be zinc oxide. Zinc mines at Zawar, near Udaipur in India, have been active since the Mauryan period (c. 322–187 BCE). The smelting of metallic zinc here, however, appears to have begun around the 12th century AD. One estimate is that this location produced an estimated million tonnes of metallic zinc and zinc oxide from the 12th to 16th centuries. Another estimate gives a total production of 60,000 tonnes of metallic zinc over this period. The Rasaratna Samuccaya, written in approximately the 13th century AD, mentions two types of zinc-containing ores: one used for metal extraction and another used for medicinal purposes. Early studies and naming Zinc was distinctly recognized as a metal under the designation of Yasada or Jasada in the medical lexicon ascribed to the Hindu king Madanapala (of the Taka dynasty), written about the year 1374. Smelting and extraction of impure zinc by reducing calamine with wool and other organic substances was accomplished in the 13th century in India. The Chinese did not learn of the technique until the 17th century. Alchemists burned zinc metal in air and collected the resulting zinc oxide on a condenser. Some alchemists called this zinc oxide lana philosophica, Latin for "philosopher's wool", because it collected in wooly tufts, whereas others thought it looked like white snow and named it nix album. The name of the metal was probably first documented by Paracelsus, a Swiss-born German alchemist, who referred to the metal as "zincum" or "zinken" in his book Liber Mineralium II, in the 16th century. The word is probably derived from the German Zinke, and supposedly meant "tooth-like, pointed or jagged" (metallic zinc crystals have a needle-like appearance). Zink could also imply "tin-like" because of its relation to German Zinn, meaning tin. Yet another possibility is that the word is derived from the Persian word seng, meaning stone. The metal was also called Indian tin, tutanego, calamine, and spinter. German metallurgist Andreas Libavius received a quantity of what he called "calay" (from the Malay or Hindi word for tin) originating from Malabar off a cargo ship captured from the Portuguese in the year 1596. Libavius described the properties of the sample, which may have been zinc. Zinc was regularly imported to Europe from the Orient in the 17th and early 18th centuries, but was at times very expensive. Isolation Metallic zinc was isolated in India by 1300 AD. Before it was isolated in Europe, it was imported from India in about 1600 CE. Postlewayt's Universal Dictionary, a contemporary source giving technological information in Europe, did not mention zinc before 1751, but the element was studied before then. Flemish metallurgist and alchemist P. M.
de Respour reported that he had extracted metallic zinc from zinc oxide in 1668. By the start of the 18th century, Étienne François Geoffroy described how zinc oxide condenses as yellow crystals on bars of iron placed above zinc ore that is being smelted. In Britain, John Lane is said to have carried out experiments to smelt zinc, probably at Landore, prior to his bankruptcy in 1726. In 1738 in Great Britain, William Champion patented a process to extract zinc from calamine in a vertical retort-style smelter. His technique resembled that used at Zawar zinc mines in Rajasthan, but no evidence suggests he visited the Orient. Champion's process was used through 1851. German chemist Andreas Marggraf normally gets credit for isolating pure metallic zinc in the West, even though Swedish chemist Anton von Swab had distilled zinc from calamine four years previously. In his 1746 experiment, Marggraf heated a mixture of calamine and charcoal in a closed vessel without copper to obtain a metal. This procedure became commercially practical by 1752. Later work William Champion's brother, John, patented a process in 1758 for calcining zinc sulfide into an oxide usable in the retort process. Prior to this, only calamine could be used to produce zinc. In 1798, Johann Christian Ruberg improved on the smelting process by building the first horizontal retort smelter. Jean-Jacques Daniel Dony built a different kind of horizontal zinc smelter in Belgium that processed even more zinc. Italian doctor Luigi Galvani discovered in 1780 that connecting the spinal cord of a freshly dissected frog to an iron rail attached by a brass hook caused the frog's leg to twitch. He incorrectly thought he had discovered an ability of nerves and muscles to create electricity and called the effect "animal electricity". The galvanic cell and the process of galvanization were both named for Luigi Galvani, and his discoveries paved the way for electrical batteries, galvanization, and cathodic protection. Galvani's friend, Alessandro Volta, continued researching the effect and invented the Voltaic pile in 1800. Volta's pile consisted of a stack of simplified galvanic cells, each being one plate of copper and one of zinc connected by an electrolyte. By stacking these units in series, the Voltaic pile (or "battery") as a whole had a higher voltage, which could be used more easily than single cells. Electricity is produced because the Volta potential between the two metal plates makes electrons flow from the zinc to the copper and corrode the zinc. The non-magnetic character of zinc and its lack of color in solution delayed discovery of its importance to biochemistry and nutrition. This changed in 1940 when carbonic anhydrase, an enzyme that scrubs carbon dioxide from blood, was shown to have zinc in its active site. The digestive enzyme carboxypeptidase became the second known zinc-containing enzyme in 1955. Production Mining and processing Zinc is the fourth most common metal in use, trailing only iron, aluminium, and copper with an annual production of about 13 million tonnes. The world's largest zinc producer is Nyrstar, a merger of the Australian OZ Minerals and the Belgian Umicore. About 70% of the world's zinc originates from mining, while the remaining 30% comes from recycling secondary zinc. Commercially pure zinc is known as Special High Grade, often abbreviated SHG, and is 99.995% pure. 
Worldwide, 95% of new zinc is mined from sulfidic ore deposits, in which sphalerite (ZnS) is nearly always mixed with the sulfides of copper, lead and iron. Zinc mines are scattered throughout the world, with the main areas being China, Australia, and Peru. China produced 38% of the global zinc output in 2014. Zinc metal is produced using extractive metallurgy. The ore is finely ground, then put through froth flotation to separate minerals from gangue (exploiting the hydrophobicity of the mineral surfaces), to get a zinc sulfide ore concentrate consisting of about 50% zinc, 32% sulfur, 13% iron, and 5% SiO2. Roasting converts the zinc sulfide concentrate to zinc oxide: 2 ZnS + 3 O2 → 2 ZnO + 2 SO2 (a worked mass balance for this step appears at the end of this passage). The sulfur dioxide is used for the production of sulfuric acid, which is necessary for the leaching process. If deposits of zinc carbonate, zinc silicate, or zinc-spinel (like the Skorpion Deposit in Namibia) are used for zinc production, the roasting can be omitted. For further processing two basic methods are used: pyrometallurgy or electrowinning. Pyrometallurgy reduces zinc oxide with carbon or carbon monoxide at about 950 °C into the metal, which is distilled as zinc vapor to separate it from other metals, which are not volatile at those temperatures. The zinc vapor is collected in a condenser. The equations below describe this process: ZnO + C → Zn + CO (at 950 °C); ZnO + CO → Zn + CO2 (at 950 °C). In electrowinning, zinc is leached from the ore concentrate by sulfuric acid and impurities are precipitated: ZnO + H2SO4 → ZnSO4 + H2O. Finally, the zinc is reduced by electrolysis: 2 ZnSO4 + 2 H2O → 2 Zn + O2 + 2 H2SO4. The sulfuric acid is regenerated and recycled to the leaching step. When galvanised feedstock is fed to an electric arc furnace, the zinc is recovered from the dust by a number of processes, predominantly the Waelz process (90% as of 2014). Environmental impact Refinement of sulfidic zinc ores produces large volumes of sulfur dioxide and cadmium vapor. Smelter slag and other residues contain significant quantities of metals. About 1.1 million tonnes of metallic zinc and 130 thousand tonnes of lead were mined and smelted in the Belgian towns of La Calamine and Plombières between 1806 and 1882. The dumps of the past mining operations leach zinc and cadmium, and the sediments of the Geul River contain non-trivial amounts of metals. About two thousand years ago, emissions of zinc from mining and smelting totaled 10 thousand tonnes a year. After increasing 10-fold from 1850, zinc emissions peaked at 3.4 million tonnes per year in the 1980s and declined to 2.7 million tonnes in the 1990s, although a 2005 study of the Arctic troposphere found that the concentrations there did not reflect the decline. Man-made and natural emissions occur at a ratio of 20 to 1. Zinc in rivers flowing through industrial and mining areas can be as high as 20 ppm. Effective sewage treatment greatly reduces this; treatment along the Rhine, for example, has decreased zinc levels to 50 ppb. Concentrations of zinc as low as 2 ppm adversely affect the amount of oxygen that fish can carry in their blood. Soils contaminated with zinc from mining, refining, or fertilizing with zinc-bearing sludge can contain several grams of zinc per kilogram of dry soil. Levels of zinc in excess of 500 ppm in soil interfere with the ability of plants to absorb other essential metals, such as iron and manganese. Zinc levels of 2000 ppm to 180,000 ppm (18%) have been recorded in some soil samples.
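As promised above, here is a back-of-the-envelope stoichiometry check on the roasting reaction 2 ZnS + 3 O2 → 2 ZnO + 2 SO2, using standard molar masses; the one-tonne feed quantity is purely illustrative.

```python
# Mass balance for roasting: how much SO2 results per tonne of ZnS feed.
M_Zn, M_S, M_O = 65.38, 32.06, 16.00   # standard molar masses, g/mol
M_ZnS = M_Zn + M_S                     # 97.44 g/mol
M_SO2 = M_S + 2 * M_O                  # 64.06 g/mol

tonnes_ZnS = 1.0                       # illustrative feed quantity
mol_ZnS = tonnes_ZnS * 1e6 / M_ZnS     # grams -> moles
mol_SO2 = mol_ZnS                      # 1:1 mole ratio per the balanced equation
tonnes_SO2 = mol_SO2 * M_SO2 / 1e6
print(f"{tonnes_SO2:.2f} t SO2 per t ZnS roasted")   # ~0.66 t
```

The roughly two-thirds-of-a-tonne figure makes concrete why sulfur dioxide capture (for sulfuric acid production, as described above) is integral to zinc smelting.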
Applications Major applications of zinc, with percentages given for the US: galvanizing (55%), brass and bronze (16%), other alloys (21%), and miscellaneous (8%). Anti-corrosion and batteries Zinc is most commonly used as an anti-corrosion agent, and galvanization (coating of iron or steel) is the most familiar form. In 2009 in the United States, 55% or 893,000 tons of the zinc metal was used for galvanization. Zinc is more reactive than iron or steel and thus will attract almost all local oxidation until it completely corrodes away. A protective surface layer of oxide and carbonate (Zn5(OH)6(CO3)2) forms as the zinc corrodes. This protection lasts even after the zinc layer is scratched, but degrades through time as the zinc corrodes away. The zinc is applied electrochemically or as molten zinc by hot-dip galvanizing or spraying. Galvanization is used on chain-link fencing, guard rails, suspension bridges, lightposts, metal roofs, heat exchangers, and car bodies. The relative reactivity of zinc and its ability to attract oxidation to itself makes it an efficient sacrificial anode in cathodic protection (CP). For example, cathodic protection of a buried pipeline can be achieved by connecting anodes made from zinc to the pipe. Zinc acts as the anode (negative terminus) by slowly corroding away as it passes electric current to the steel pipeline. Zinc is also used to cathodically protect metals that are exposed to sea water. A zinc disc attached to a ship's iron rudder will slowly corrode while the rudder stays intact. Similarly, a zinc plug attached to a propeller or the metal protective guard for the keel of the ship provides temporary protection. With a standard electrode potential (SEP) of −0.76 volts, zinc is used as an anode material for batteries; a short calculation at the end of this passage shows how electrode potentials combine into a cell voltage. (More reactive lithium, with an SEP of −3.04 V, is used for anodes in lithium batteries.) Powdered zinc is used in this way in alkaline batteries, and the case (which also serves as the anode) of zinc–carbon batteries is formed from sheet zinc. Zinc is used as the anode or fuel of the zinc–air battery/fuel cell. The zinc–cerium redox flow battery also relies on a zinc-based negative half-cell. Alloys A widely used zinc alloy is brass, in which copper is alloyed with anywhere from 3% to 45% zinc, depending upon the type of brass. Brass is generally more ductile and stronger than copper, and has superior corrosion resistance. These properties make it useful in communication equipment, hardware, musical instruments, and water valves. Other widely used zinc alloys include nickel silver, typewriter metal, soft and aluminium solder, and commercial bronze. Zinc is also used in contemporary pipe organs as a substitute for the traditional lead/tin alloy in pipes. Alloys of 85–88% zinc, 4–10% copper, and 2–8% aluminium find limited use in certain types of machine bearings. Zinc has been the primary metal in American one cent coins (pennies) since 1982. The zinc core is coated with a thin layer of copper to give the appearance of a copper coin. In 1994, about 33,000 tonnes of zinc were used to produce 13.6 billion pennies in the United States. Alloys of zinc with small amounts of copper, aluminium, and magnesium are useful in die casting as well as spin casting, especially in the automotive, electrical, and hardware industries. These alloys are marketed under the name Zamak. An example of this is zinc aluminium (ZA) alloy. The low melting point together with the low viscosity of the alloy makes possible the production of small and intricate shapes.
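Referring back to the battery discussion above: a standard electrode potential fixes a cell voltage via E_cell = E_cathode − E_anode. In the sketch below, zinc's −0.76 V and lithium's −3.04 V come from the text, while copper's +0.34 V is a standard tabulated value added here for illustration.

```python
# How electrode potentials combine into a cell voltage: E_cell = E_cathode - E_anode.
SEP = {"Zn": -0.76, "Cu": +0.34, "Li": -3.04}   # volts vs. standard hydrogen electrode

def cell_voltage(anode: str, cathode: str) -> float:
    """Standard cell voltage for the given anode/cathode pair."""
    return SEP[cathode] - SEP[anode]

print(cell_voltage("Zn", "Cu"))   # 1.10 V, the classic zinc-copper (Daniell) cell
```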
The low working temperature leads to rapid cooling of the cast products and fast production for assembly. Another alloy, marketed under the brand name Prestal, contains 78% zinc and 22% aluminium, and is reported to be nearly as strong as steel but as malleable as plastic. This superplasticity of the alloy allows it to be molded using die casts made of ceramics and cement. Similar alloys with the addition of a small amount of lead can be cold-rolled into sheets. An alloy of 96% zinc and 4% aluminium is used to make stamping dies for low production run applications for which ferrous metal dies would be too expensive. For building facades, roofing, and other applications for sheet metal formed by deep drawing, roll forming, or bending, zinc alloys with titanium and copper are used. Unalloyed zinc is too brittle for these manufacturing processes. As a dense, inexpensive, easily worked material, zinc is used as a lead replacement. In the wake of lead concerns, zinc appears in weights for various applications ranging from fishing to tire balances and flywheels. Cadmium zinc telluride (CZT) is a semiconductive alloy that can be divided into an array of small sensing devices. These devices are similar to an integrated circuit and can detect the energy of incoming gamma ray photons. When behind an absorbing mask, the CZT sensor array can determine the direction of the rays. Other industrial uses Roughly one quarter of all zinc output in the United States in 2009 was consumed in zinc compounds, a variety of which are used industrially. Zinc oxide is widely used as a white pigment in paints and as a catalyst in the manufacture of rubber to disperse heat. Zinc oxide is used to protect rubber polymers and plastics from ultraviolet radiation (UV). The semiconductor properties of zinc oxide make it useful in varistors and photocopying products. The zinc–zinc oxide cycle is a two-step thermochemical process based on zinc and zinc oxide for hydrogen production. Zinc chloride is often added to lumber as a fire retardant and sometimes as a wood preservative. It is used in the manufacture of other chemicals. Zinc methyl (Zn(CH3)2) is used in a number of organic syntheses. Zinc sulfide (ZnS) is used in luminescent pigments such as on the hands of clocks, X-ray and television screens, and luminous paints. Crystals of ZnS are used in lasers that operate in the mid-infrared part of the spectrum. Zinc sulfate is a chemical in dyes and pigments. Zinc pyrithione is used in antifouling paints. Zinc powder is sometimes used as a propellant in model rockets. When a compressed mixture of 70% zinc and 30% sulfur powder is ignited there is a violent chemical reaction. This produces zinc sulfide, together with large amounts of hot gas, heat, and light. Zinc sheet metal is used as a durable covering for roofs, walls, and countertops, the last often seen in bistros and oyster bars, and is known for the rustic look imparted by its surface oxidation in use to a blue-gray patina and its susceptibility to scratching. 64Zn, the most abundant isotope of zinc, is very susceptible to neutron activation, being transmuted into the highly radioactive 65Zn, which has a half-life of 244 days and produces intense gamma radiation. Because of this, zinc oxide used in nuclear reactors as an anti-corrosion agent is depleted of 64Zn before use; this is called depleted zinc oxide. For the same reason, zinc has been proposed as a salting material for nuclear weapons (cobalt is another, better-known salting material).
A jacket of isotopically enriched 64Zn would be irradiated by the intense high-energy neutron flux from an exploding thermonuclear weapon, forming a large amount of 65Zn and significantly increasing the radioactivity of the weapon's fallout. Such a weapon is not known to have ever been built, tested, or used. 65Zn is also used as a tracer to study how alloys that contain zinc wear out, or the path and the role of zinc in organisms. Zinc dithiocarbamate complexes are used as agricultural fungicides; these include Zineb, Metiram, Propineb and Ziram. Zinc naphthenate is used as a wood preservative. Zinc, in the form of ZDDP, is used as an anti-wear additive for metal parts in engine oil. Organic chemistry Organozinc chemistry is the science of compounds that contain carbon–zinc bonds, describing the physical properties, synthesis, and chemical reactions. Many organozinc compounds are commercially important. Among the important applications is the Frankland–Duppa reaction, in which an oxalate ester (ROCOCOOR) reacts with an alkyl halide R'X, zinc and hydrochloric acid to form α-hydroxycarboxylic esters RR'COHCOOR. Organozincs have similar reactivity to Grignard reagents but are much less nucleophilic, and they are expensive and difficult to handle. Organozincs typically perform nucleophilic addition on electrophiles such as aldehydes, which are then reduced to alcohols. Commercially available diorganozinc compounds include dimethylzinc, diethylzinc and diphenylzinc. Like Grignard reagents, organozincs are commonly produced from organobromine precursors. Zinc has found many uses in catalysis in organic synthesis, including enantioselective synthesis, being a cheap and readily available alternative to precious metal complexes. Quantitative results (yield and enantiomeric excess) obtained with chiral zinc catalysts can be comparable to those achieved with palladium, ruthenium, iridium and others. Dietary supplement In most single-tablet, over-the-counter, daily vitamin and mineral supplements, zinc is included in such forms as zinc oxide, zinc acetate, zinc gluconate, or zinc amino acid chelate. Generally, zinc supplementation is recommended where there is high risk of zinc deficiency (such as in low- and middle-income countries) as a preventive measure. Although zinc sulfate is a commonly used zinc form, zinc citrate, gluconate and picolinate may be valid options as well. These forms are better absorbed than zinc oxide. Gastroenteritis Zinc is an inexpensive and effective part of treatment of diarrhea among children in the developing world. Zinc becomes depleted in the body during diarrhea, and replenishing zinc with a 10- to 14-day course of treatment can reduce the duration and severity of diarrheal episodes and may also prevent future episodes for as long as three months. Gastroenteritis is strongly attenuated by ingestion of zinc, possibly by direct antimicrobial action of the ions in the gastrointestinal tract, or by the absorption of the zinc and re-release from immune cells (all granulocytes secrete zinc), or both. Weight gain Zinc deficiency may lead to loss of appetite. The use of zinc in the treatment of anorexia has been advocated since 1979. At least 15 clinical trials have shown that zinc improved weight gain in anorexia. A 1994 trial showed that zinc doubled the rate of body mass increase in the treatment of anorexia nervosa. Deficiency of other nutrients such as tyrosine, tryptophan and thiamine could contribute to this phenomenon of "malnutrition-induced malnutrition".
A meta-analysis of 33 prospective intervention trials regarding zinc supplementation and its effects on the growth of children in many countries showed that zinc supplementation alone had a statistically significant effect on linear growth and body weight gain, indicating that other deficiencies that may have been present were not responsible for growth retardation. Other People taking zinc supplements may slow the progression of age-related macular degeneration. Zinc supplementation is an effective treatment for acrodermatitis enteropathica, a genetic disorder affecting zinc absorption that was previously fatal to affected infants. Zinc deficiency has been associated with major depressive disorder (MDD), and zinc supplements may be an effective treatment. Zinc may help individuals sleep more. Topical use Topical preparations of zinc include those used on the skin, often in the form of zinc oxide. Zinc oxide is generally recognized by the FDA as safe and effective and is considered very photo-stable. Zinc oxide is one of the most common active ingredients formulated into sunscreens to mitigate sunburn. Applied thinly to a baby's diaper area (perineum) with each diaper change, it can protect against diaper rash. Chelated zinc is used in toothpastes and mouthwashes to prevent bad breath; zinc citrate helps reduce the build-up of calculus (tartar). Zinc pyrithione is widely included in shampoos to prevent dandruff. Topical zinc has also been shown to effectively treat genital herpes, as well as prolong remission. Biological role Zinc is an essential trace element for humans and other animals, for plants and for microorganisms. Zinc is required for the function of over 300 enzymes and 1000 transcription factors, and is stored and transferred in metallothioneins. It is the second most abundant trace metal in humans after iron, and it is the only metal which appears in all enzyme classes. In proteins, zinc ions are often coordinated to the amino acid side chains of aspartic acid, glutamic acid, cysteine and histidine. The theoretical and computational description of this zinc binding in proteins (as well as that of other transition metals) is difficult. Roughly 2–4 grams of zinc are distributed throughout the human body. Most zinc is in the brain, muscle, bones, kidney, and liver, with the highest concentrations in the prostate and parts of the eye. Semen is particularly rich in zinc, a key factor in prostate gland function and reproductive organ growth. Zinc homeostasis of the body is mainly controlled by the intestine. Here, ZIP4 and especially TRPM7 have been linked to intestinal zinc uptake essential for postnatal survival. In humans, the biological roles of zinc are ubiquitous. It interacts with "a wide range of organic ligands", and has roles in the metabolism of RNA and DNA, signal transduction, and gene expression. It also regulates apoptosis. A review from 2015 indicated that about 10% of human proteins (~3000) bind zinc, in addition to hundreds more that transport and traffic zinc; a similar in silico study in the plant Arabidopsis thaliana found 2367 zinc-related proteins. In the brain, zinc is stored in specific synaptic vesicles by glutamatergic neurons and can modulate neuronal excitability. It plays a key role in synaptic plasticity and so in learning. Zinc homeostasis also plays a critical role in the functional regulation of the central nervous system.
Dysregulation of zinc homeostasis in the central nervous system that results in excessive synaptic zinc concentrations is believed to induce neurotoxicity through mitochondrial oxidative stress (e.g., by disrupting certain enzymes involved in the electron transport chain, including complex I, complex III, and α-ketoglutarate dehydrogenase), the dysregulation of calcium homeostasis, glutamatergic neuronal excitotoxicity, and interference with intraneuronal signal transduction. L- and D-histidine facilitate brain zinc uptake. SLC30A3 is the primary zinc transporter involved in cerebral zinc homeostasis. Enzymes Zinc is an efficient Lewis acid, making it a useful catalytic agent in hydroxylation and other enzymatic reactions. The metal also has a flexible coordination geometry, which allows proteins using it to rapidly shift conformations to perform biological reactions. Two examples of zinc-containing enzymes are carbonic anhydrase and carboxypeptidase, which are vital to the processes of carbon dioxide (CO2) regulation and digestion of proteins, respectively. In vertebrate blood, carbonic anhydrase converts CO2 into bicarbonate, and the same enzyme transforms the bicarbonate back into CO2 for exhalation through the lungs. Without this enzyme, this conversion would occur about one million times slower at the normal blood pH of 7, or would require a pH of 10 or more. The non-related β-carbonic anhydrase is required in plants for leaf formation, the synthesis of indole acetic acid (auxin) and alcoholic fermentation. Carboxypeptidase cleaves peptide linkages during digestion of proteins. A coordinate covalent bond is formed between the terminal peptide and a C=O group attached to zinc, which gives the carbon a positive charge. This helps to create a hydrophobic pocket on the enzyme near the zinc, which attracts the non-polar part of the protein being digested. Signalling Zinc has been recognized as a messenger, able to activate signalling pathways. Many of these pathways provide the driving force in aberrant cancer growth. They can be targeted through ZIP transporters. Other proteins Zinc serves a purely structural role in zinc fingers, twists and clusters. Zinc fingers form parts of some transcription factors, which are proteins that recognize DNA base sequences during the replication and transcription of DNA. Each of the nine or ten Zn2+ ions in a zinc finger helps maintain the finger's structure by coordinately binding to four amino acids in the transcription factor. In blood plasma, zinc is bound to and transported by albumin (60%, low-affinity) and transferrin (10%). Because transferrin also transports iron, excessive iron reduces zinc absorption, and vice versa. A similar antagonism exists with copper. The concentration of zinc in blood plasma stays relatively constant regardless of zinc intake. Cells in the salivary gland, prostate, immune system, and intestine use zinc signaling to communicate with other cells. Zinc may be held in metallothionein reserves within microorganisms or in the intestines or liver of animals. Metallothionein in intestinal cells is capable of adjusting absorption of zinc by 15–40%. However, inadequate or excessive zinc intake can be harmful; excess zinc particularly impairs copper absorption because metallothionein absorbs both metals. The human dopamine transporter contains a high-affinity extracellular zinc binding site which, upon zinc binding, inhibits dopamine reuptake and amplifies amphetamine-induced dopamine efflux in vitro.
The human serotonin transporter and norepinephrine transporter do not contain zinc binding sites. Some EF-hand calcium binding proteins such as S100 or NCS-1 are also able to bind zinc ions. Nutrition Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for zinc in 2001. The current EARs for zinc for women and men ages 14 and up are 6.8 and 9.4 mg/day, respectively. The RDAs are 8 and 11 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. The RDA for pregnancy is 11 mg/day, and the RDA for lactation is 12 mg/day. For infants up to 12 months the RDA is 3 mg/day. For children ages 1–13 years the RDA increases with age from 3 to 8 mg/day. As for safety, the IOM sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of zinc the adult UL is 40 mg/day, including both food and supplements combined (lower for children); a small illustrative sketch of these reference values appears at the end of this passage. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 18 and older the PRI calculations are complex, as the EFSA sets progressively higher values as the phytate content of the diet increases. For women, PRIs increase from 7.5 to 12.7 mg/day as phytate intake increases from 300 to 1200 mg/day; for men the range is 9.4 to 16.3 mg/day. These PRIs are higher than the U.S. RDAs. The EFSA reviewed the same safety question and set its UL at 25 mg/day, which is much lower than the U.S. value. For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For zinc labeling purposes 100% of the Daily Value was 15 mg, but on May 27, 2016, it was revised to 11 mg. A table of the old and new adult daily values is provided at Reference Daily Intake. Dietary intake Animal products such as meat, fish, shellfish, fowl, eggs, and dairy contain zinc. The concentration of zinc in plants varies with the level in the soil. With adequate zinc in the soil, the food plants that contain the most zinc are wheat (germ and bran) and various seeds, including sesame, poppy, alfalfa, celery, and mustard. Zinc is also found in beans, nuts, almonds, whole grains, pumpkin seeds, sunflower seeds, and blackcurrant. Other sources include fortified food and dietary supplements in various forms. A 1998 review concluded that zinc oxide, one of the most common supplements in the United States, and zinc carbonate are nearly insoluble and poorly absorbed in the body. This review cited studies that found lower plasma zinc concentrations in subjects who consumed zinc oxide and zinc carbonate than in those who took zinc acetate and sulfate salts. For fortification, however, a 2003 review recommended cereals (containing zinc oxide) as a cheap, stable source that is as easily absorbed as the more expensive forms. A 2005 study found that various compounds of zinc, including oxide and sulfate, did not show statistically significant differences in absorption when added as fortificants to maize tortillas. Deficiency Nearly two billion people in the developing world are deficient in zinc.
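As promised above, the U.S. adult reference values lend themselves to a small data-plus-helper sketch. Only the adult RDA and UL from the text are encoded, and the classification logic is a deliberate simplification (it ignores age bands, pregnancy, and lactation adjustments).

```python
# U.S. adult zinc reference values from the text: RDA 8 mg/day (women),
# 11 mg/day (men); UL 40 mg/day for both. A toy intake classifier follows.
ZINC_DRI_MG = {"female": {"rda": 8.0,  "ul": 40.0},
               "male":   {"rda": 11.0, "ul": 40.0}}

def classify_intake(mg_per_day: float, sex: str) -> str:
    """Classify a daily zinc intake against the adult RDA and UL."""
    dri = ZINC_DRI_MG[sex]
    if mg_per_day < dri["rda"]:
        return "below RDA"
    if mg_per_day > dri["ul"]:
        return "above UL"
    return "within RDA..UL"

print(classify_intake(9.7, "female"))   # within RDA..UL
print(classify_intake(50.0, "male"))    # above UL
```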
Deficiency Nearly two billion people in the developing world are deficient in zinc. Groups at risk include children in developing countries and the elderly with chronic illnesses. In children, deficiency causes an increase in infection and diarrhea and contributes to the death of about 800,000 children worldwide per year. The World Health Organization advocates zinc supplementation for severe malnutrition and diarrhea. Zinc supplements help prevent disease and reduce mortality, especially among children with low birth weight or stunted growth. However, zinc supplements should not be administered alone, because many in the developing world have several deficiencies, and zinc interacts with other micronutrients. While zinc deficiency is usually due to insufficient dietary intake, it can be associated with malabsorption, acrodermatitis enteropathica, chronic liver disease, chronic renal disease, sickle cell disease, diabetes, malignancy, and other chronic illnesses. In the United States, a federal survey of food consumption determined that for women and men over the age of 19, average consumption was 9.7 and 14.2 mg/day, respectively. For women, 17% consumed less than the EAR; for men, 11%. The percentages below the EAR increased with age. The most recent published update of the survey (NHANES 2013–2014) reported lower averages – 9.3 and 13.2 mg/day – again with intake decreasing with age. Symptoms of mild zinc deficiency are diverse. Clinical outcomes include depressed growth, diarrhea, impotence and delayed sexual maturation, alopecia, eye and skin lesions, impaired appetite, altered cognition, impaired immune functions, defects in carbohydrate use, and reproductive teratogenesis. Zinc deficiency depresses immunity, but excessive zinc does also. Despite some concerns, Western vegetarians and vegans do not suffer from overt zinc deficiency any more than meat-eaters do. Major plant sources of zinc include cooked dried beans, sea vegetables, fortified cereals, soy foods, nuts, peas, and seeds. However, phytates in many whole grains and fibers may interfere with zinc absorption, and the effects of marginal zinc intake are poorly understood. The zinc chelator phytate, found in seeds and cereal bran, can contribute to zinc malabsorption. Some evidence suggests that more than the US RDA (8 mg/day for adult women; 11 mg/day for adult men) may be needed in those whose diet is high in phytates, such as some vegetarians. The European Food Safety Authority (EFSA) guidelines attempt to compensate for this by recommending higher zinc intake when dietary phytate intake is greater. These considerations must be balanced against the paucity of adequate zinc biomarkers: the most widely used indicator, plasma zinc, has poor sensitivity and specificity. Soil remediation Species of Calluna, Erica and Vaccinium can grow in zinc-metalliferous soils, because translocation of toxic ions is prevented by the action of ericoid mycorrhizal fungi. Agriculture Zinc deficiency appears to be the most common micronutrient deficiency in crop plants; it is particularly common in high-pH soils. About half of the cropland in Turkey and India, a third in China, and most of Western Australia is zinc-deficient. Substantial responses to zinc fertilization have been reported in these areas. Plants that grow in soils that are zinc-deficient are more susceptible to disease.
Zinc is added to the soil primarily through the weathering of rocks, but humans have added zinc through fossil fuel combustion, mine waste, phosphate fertilizers, pesticide (zinc phosphide), limestone, manure, sewage sludge, and particles from galvanized surfaces. Excess zinc is toxic to plants, although zinc toxicity is far less widespread than zinc deficiency. Precautions Toxicity Although zinc is an essential requirement for good health, excess zinc can be harmful. Excessive absorption of zinc suppresses copper and iron absorption. The free zinc ion in solution is highly toxic to plants, invertebrates, and even vertebrate fish. The Free Ion Activity Model is well established in the literature, and shows that just micromolar amounts of the free ion kill some organisms. A recent example showed 6 micromolar killing 93% of all Daphnia in water. The free zinc ion is a powerful Lewis acid, up to the point of being corrosive. Stomach acid contains hydrochloric acid, in which metallic zinc dissolves readily to give corrosive zinc chloride. Swallowing a post-1982 American one-cent piece (97.5% zinc) can cause damage to the stomach lining through the high solubility of the zinc ion in the acidic stomach. Evidence shows that people taking 100–300 mg of zinc daily may suffer induced copper deficiency. A 2007 trial observed that elderly men taking 80 mg daily were hospitalized for urinary complications more often than those taking a placebo. Levels of 100–300 mg may interfere with the use of copper and iron or adversely affect cholesterol. Zinc in excess of 500 ppm in soil interferes with the plant absorption of other essential metals, such as iron and manganese. A condition called the zinc shakes or "zinc chills" can be induced by inhalation of zinc fumes while brazing or welding galvanized materials. Zinc is a common ingredient of denture cream, which may contain between 17 and 38 mg of zinc per gram. Disability and even deaths from excessive use of these products have been claimed. The U.S. Food and Drug Administration (FDA) states that zinc damages nerve receptors in the nose, causing anosmia. Reports of anosmia were also observed in the 1930s when zinc preparations were used in a failed attempt to prevent polio infections. On June 16, 2009, the FDA ordered removal of zinc-based intranasal cold products from store shelves. The FDA said the loss of smell can be life-threatening because people with impaired smell cannot detect leaking gas or smoke, and cannot tell if food has spoiled before they eat it. Recent research suggests that the topical antimicrobial zinc pyrithione is a potent heat shock response inducer that may impair genomic integrity with induction of PARP-dependent energy crisis in cultured human keratinocytes and melanocytes. Poisoning In 1982, the US Mint began minting pennies coated in copper but containing primarily zinc. Zinc pennies pose a risk of zinc toxicosis, which can be fatal. One reported case of chronic ingestion of 425 pennies (over 1 kg of zinc) resulted in death due to gastrointestinal bacterial and fungal sepsis. Another patient who ingested 12 grams of zinc showed only lethargy and ataxia (gross lack of coordination of muscle movements). Several other cases have been reported of humans suffering zinc intoxication by the ingestion of zinc coins. Pennies and other small coins are sometimes ingested by dogs, requiring veterinary removal of the foreign objects.
The zinc content of some coins can cause zinc toxicity, which is commonly fatal in dogs through severe hemolytic anemia and liver or kidney damage; vomiting and diarrhea are possible symptoms. Zinc is highly toxic to parrots, and poisoning can often be fatal. The consumption of fruit juices stored in galvanized cans has resulted in mass zinc poisonings of parrots.
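As a quick plausibility check on the figures in the poisoning cases above, the sketch below redoes the arithmetic. The 2.5 g mass of a post-1982 US cent and the molar mass of zinc (65.38 g/mol) are background values supplied here, not stated in the text; treat this as an illustration, not a source.

```python
# Back-of-the-envelope checks for the zinc figures quoted above.
# Assumptions (not from the text): a post-1982 US cent weighs ~2.5 g;
# molar mass of zinc = 65.38 g/mol.

PENNY_MASS_G = 2.5       # assumed mass of one post-1982 US cent
ZINC_FRACTION = 0.975    # zinc content quoted in the text
ZN_MOLAR_MASS = 65.38    # g/mol, standard value

# Case 1: chronic ingestion of 425 pennies ("over 1 kg of zinc").
zinc_kg = 425 * PENNY_MASS_G * ZINC_FRACTION / 1000
print(f"425 pennies ≈ {zinc_kg:.2f} kg of zinc")   # ≈ 1.04 kg — consistent

# Case 2: the 6 micromolar free-ion concentration lethal to Daphnia,
# expressed in more familiar mass units.
umol_per_L = 6
mg_per_L = umol_per_L * 1e-6 * ZN_MOLAR_MASS * 1000
print(f"6 µM free zinc ion ≈ {mg_per_L:.2f} mg/L")  # ≈ 0.39 mg/L
```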
Physical sciences
Chemical elements_2
null
34422
https://en.wikipedia.org/wiki/Zirconium
Zirconium
Zirconium is a chemical element; it has symbol Zr and atomic number 40. First identified in 1789, isolated in impure form in 1824, and manufactured at scale by 1925, pure zirconium is a lustrous transition metal with a greyish-white color that closely resembles hafnium and, to a lesser extent, titanium. It is solid at room temperature, ductile, malleable and corrosion-resistant. The name zirconium is derived from the name of the mineral zircon, the most important source of zirconium. The word is related to Persian zargun (zircon; zar-gun, "gold-like" or "as gold"). Besides zircon, zirconium occurs in over 140 other minerals, including baddeleyite and eudialyte; most zirconium is produced as a byproduct of minerals mined for titanium and tin. Zirconium forms a variety of inorganic compounds, such as zirconium dioxide, and organometallic compounds, such as zirconocene dichloride. Five isotopes occur naturally, four of which are stable. The metal and its alloys are mainly used as a refractory and opacifier; zirconium alloys are used to clad nuclear fuel rods due to their low neutron absorption and strong resistance to corrosion, and in space vehicles and turbine blades where high heat resistance is necessary. Zirconium also finds uses in flashbulbs, biomedical applications such as dental implants and prosthetics, deodorant, and water purification systems. Zirconium compounds have no known biological role, though the element is widely distributed in nature and appears in small quantities in biological systems without adverse effects. There is no indication of zirconium as a carcinogen. The main hazards posed by zirconium are flammability in powder form and irritation of the eyes. Characteristics Zirconium is a lustrous, greyish-white, soft, ductile, malleable metal that is solid at room temperature, though it is hard and brittle at lesser purities. In powder form, zirconium is highly flammable, but the solid form is much less prone to ignition. Zirconium is highly resistant to corrosion by alkalis, acids, salt water and other agents. However, it will dissolve in hydrochloric and sulfuric acid, especially when fluorine is present. Zirconium–zinc alloys are magnetic below 35 K. The melting point of zirconium is 1855 °C (3371 °F), and the boiling point is 4409 °C (7968 °F). Zirconium has an electronegativity of 1.33 on the Pauling scale. Of the elements within the d-block with known electronegativities, zirconium has the fourth lowest electronegativity after hafnium, yttrium, and lutetium. At room temperature zirconium exhibits a hexagonal close-packed crystal structure, α-Zr, which changes to β-Zr, a body-centered cubic crystal structure, at 863 °C. Zirconium remains in the β-phase up to the melting point. Isotopes Naturally occurring zirconium is composed of five isotopes. ⁹⁰Zr, ⁹¹Zr, ⁹²Zr and ⁹⁴Zr are stable, although ⁹⁴Zr is predicted to undergo double beta decay (not observed experimentally) with a half-life of more than 1.10×10¹⁷ years. ⁹⁶Zr has a half-life of 2.34×10¹⁹ years, and is the longest-lived radioisotope of zirconium. Of these natural isotopes, ⁹⁰Zr is the most common, making up 51.45% of all zirconium. ⁹⁶Zr is the least common, comprising only 2.80% of zirconium. Thirty-three artificial isotopes of zirconium have been synthesized, ranging in atomic mass from 77 to 114. ⁹³Zr is the longest-lived artificial isotope, with a half-life of 1.61×10⁶ years.
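To make the half-lives above concrete, the standard decay law can be applied to ⁹⁶Zr; the ~4.5 × 10⁹-year age of the Earth used here is background knowledge, not a figure from the text:

```latex
\[
\frac{N(t)}{N_0} = 2^{-t/t_{1/2}}, \qquad
\frac{N(4.5\times10^{9}\,\mathrm{yr})}{N_0}
= 2^{-4.5\times10^{9}/2.34\times10^{19}}
\approx 1 - 1.3\times10^{-10},
\]
```

so essentially none of the primordial ⁹⁶Zr has decayed since the planet formed.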
Radioactive isotopes at or above mass number 93 decay by electron emission, whereas those at or below 89 decay by positron emission. The only exception is ⁸⁸Zr, which decays by electron capture. Thirteen isotopes of zirconium also exist as metastable isomers: ⁸³ᵐ¹Zr, ⁸³ᵐ²Zr, ⁸⁵ᵐZr, ⁸⁷ᵐZr, ⁸⁸ᵐZr, ⁸⁹ᵐZr, ⁹⁰ᵐ¹Zr, ⁹⁰ᵐ²Zr, ⁹¹ᵐZr, ⁹⁷ᵐZr, ⁹⁸ᵐZr, ⁹⁹ᵐZr, and ¹⁰⁸ᵐZr. Of these, ⁹⁷ᵐZr has the shortest half-life, at 104.8 nanoseconds; ⁸⁹ᵐZr is the longest-lived, with a half-life of 4.161 minutes. Occurrence Zirconium has a concentration of about 130 mg/kg within the Earth's crust and about 0.026 μg/L in sea water. It is the 18th most abundant element in the crust. It is not found in nature as a native metal, reflecting its intrinsic instability with respect to water. The principal commercial source of zirconium is zircon (ZrSiO₄), a silicate mineral, which is found primarily in Australia, Brazil, India, Russia, South Africa and the United States, as well as in smaller deposits around the world. As of 2013, two-thirds of zircon mining occurs in Australia and South Africa. Zircon resources exceed 60 million tonnes worldwide and annual worldwide zirconium production is approximately 900,000 tonnes. Zirconium also occurs in more than 140 other minerals, including the commercially useful ores baddeleyite and eudialyte. Zirconium is relatively abundant in S-type stars, and has been detected in the sun and in meteorites. Lunar rock samples brought back from several Apollo missions to the moon have a high zirconium oxide content relative to terrestrial rocks. EPR spectroscopy has been used in investigations of the unusual 3+ valence state of zirconium. The EPR spectrum of Zr³⁺, initially observed as a parasitic signal in Fe-doped single crystals of ScPO₄, was definitively identified by preparing single crystals of ScPO₄ doped with isotopically enriched (94.6%) ⁹¹Zr. Single crystals of LuPO₄ and YPO₄ doped with both naturally abundant and isotopically enriched Zr have also been grown and investigated. Production Zirconium is a by-product of the mining and processing of the titanium minerals ilmenite and rutile, as well as of tin mining. From 2003 to 2007, while prices for the mineral zircon steadily increased from $360 to $840 per tonne, the price for unwrought zirconium metal decreased from $39,900 to $22,700 per tonne. Zirconium metal is much more expensive than zircon because the reduction processes are costly. Collected from coastal waters, zircon-bearing sand is purified by spiral concentrators to separate lighter materials, which are then returned to the water because they are natural components of beach sand. Using magnetic separation, the titanium ores ilmenite and rutile are removed. Most zircon is used directly in commercial applications, but a small percentage is converted to the metal. Most Zr metal is produced by the reduction of zirconium(IV) chloride with magnesium metal in the Kroll process. The resulting metal is sintered until sufficiently ductile for metalworking. Separation of zirconium and hafnium Commercial zirconium metal typically contains 1–3% hafnium, which is usually not problematic because the chemical properties of hafnium and zirconium are very similar. Their neutron-absorbing properties differ strongly, however, necessitating the separation of hafnium from zirconium for nuclear reactors. Several separation schemes are in use.
The liquid–liquid extraction of the thiocyanate–oxide derivatives exploits the fact that the hafnium derivative is slightly more soluble in methyl isobutyl ketone than in water. This method accounts for roughly two-thirds of pure zirconium production, though other methods are being researched; for instance, in India, a TBP–nitrate solvent extraction process is used for the separation of zirconium from other metals. Zr and Hf can also be separated by fractional crystallization of potassium hexafluorozirconate (K₂ZrF₆), which is less soluble in water than the analogous hafnium derivative. Fractional distillation of the tetrachlorides, also called extractive distillation, is also used. Vacuum arc melting, combined with the use of hot extruding techniques and supercooled copper hearths, is capable of producing zirconium that has been purified of oxygen, nitrogen, and carbon. Hafnium must be removed from zirconium for nuclear applications because hafnium has a neutron absorption cross-section 600 times greater than that of zirconium. The separated hafnium can be used for reactor control rods. Compounds Like other transition metals, zirconium forms a wide range of inorganic compounds and coordination complexes. In general, these compounds are colourless diamagnetic solids wherein zirconium has the oxidation state +4. Some organometallic compounds are considered to contain zirconium in the +2 oxidation state. Non-equilibrium oxidation states between 0 and 4 have been detected during zirconium oxidation. Oxides, nitrides, and carbides The most common oxide is zirconium dioxide, ZrO₂, also known as zirconia. This clear to white-coloured solid has exceptional fracture toughness (for a ceramic) and chemical resistance, especially in its cubic form. These properties make zirconia useful as a thermal barrier coating, although it is also a common diamond substitute. Zirconium monoxide, ZrO, is also known, and S-type stars are recognised by the detection of its emission lines. Zirconium tungstate has the unusual property of shrinking in all dimensions when heated, whereas most other substances expand when heated. Zirconyl chloride is one of the few water-soluble zirconium complexes, with the formula [Zr₄(OH)₁₂(H₂O)₁₆]Cl₈. Zirconium carbide and zirconium nitride are refractory solids. Both are highly corrosion-resistant and find uses in high-temperature-resistant coatings and cutting tools. Zirconium hydride phases are known to form when zirconium alloys are exposed to large quantities of hydrogen over time; because zirconium hydrides are brittle relative to zirconium alloys, the mitigation of zirconium hydride formation was studied intensively during the development of the first commercial nuclear reactors, in which zirconium carbide was a frequently used material. Lead zirconate titanate (PZT) is the most commonly used piezoelectric material, employed in transducers and actuators in medical and microelectromechanical-systems applications. Halides and pseudohalides All four common halides are known: ZrF₄, ZrCl₄, ZrBr₄, and ZrI₄. All have polymeric structures and are far less volatile than the corresponding titanium tetrahalides; they find applications in the formation of organic complexes such as zirconocene dichloride. All tend to hydrolyse to give the so-called oxyhalides and dioxides. Fusion of the tetrahalides with additional metal gives lower zirconium halides (e.g. ZrCl₃). These adopt a layered structure, conducting within the layers but not perpendicular to them. The corresponding tetraalkoxides are also known.
Unlike the halides, the alkoxides dissolve in nonpolar solvents. Dihydrogen hexafluorozirconate is used in the metal finishing industry as an etching agent to promote paint adhesion. Organic derivatives Organozirconium chemistry is key to Ziegler–Natta catalysts, used to produce polypropylene. This application exploits the ability of zirconium to reversibly form bonds to carbon. Zirconocene dibromide ((C₅H₅)₂ZrBr₂), reported in 1952 by Birmingham and Wilkinson, was the first organozirconium compound. Schwartz's reagent, prepared in 1970 by P. C. Wailes and H. Weigold, is a metallocene used in organic synthesis for transformations of alkenes and alkynes. Many complexes of Zr(II) are derivatives of zirconocene, one example being (C₅Me₅)₂Zr(CO)₂. History The zirconium-containing mineral zircon and related minerals (jargoon, jacinth, or hyacinth, ligure) were mentioned in biblical writings. The mineral was not known to contain a new element until 1789, when Klaproth analyzed a jargoon from the island of Ceylon (now Sri Lanka). He named the new element Zirkonerde (zirconia), related to the Persian zargun (zircon; zar-gun, "gold-like" or "as gold"). Humphry Davy attempted to isolate this new element in 1808 through electrolysis, but failed. Zirconium metal was first obtained in an impure form in 1824 by Berzelius, by heating a mixture of potassium and potassium zirconium fluoride in an iron tube. The crystal bar process (also known as the iodide process), discovered by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925, was the first industrial process for the commercial production of metallic zirconium. It involves the formation and subsequent thermal decomposition of zirconium tetraiodide (ZrI₄), and was superseded in 1945 by the much cheaper Kroll process developed by William Justin Kroll, in which zirconium tetrachloride (ZrCl₄) is reduced by magnesium: ZrCl₄ + 2 Mg → Zr + 2 MgCl₂ Applications Approximately 900,000 tonnes of zirconium ores were mined in 1995, mostly as zircon. Most zircon is used directly in high-temperature applications. Because it is refractory, hard, and resistant to chemical attack, zircon finds many applications. Its main use is as an opacifier, conferring a white, opaque appearance to ceramic materials. Because of its chemical resistance, zircon is also used in aggressive environments, such as moulds for molten metals. Zirconium dioxide (ZrO₂) is used in laboratory crucibles, in metallurgical furnaces, and as a refractory material. Because it is mechanically strong and flexible, it can be sintered into ceramic knives and other blades. Zircon (ZrSiO₄) and cubic zirconia (ZrO₂) are cut into gemstones for use in jewelry. Zirconium dioxide is a component in some abrasives, such as grinding wheels and sandpaper. Zircon is also used in the dating of rocks from about the time of the Earth's formation through the measurement of its inherent radioisotopes, most often uranium and lead. A small fraction of the zircon is converted to the metal, which finds various niche applications. Because of zirconium's excellent resistance to corrosion, it is often used as an alloying agent in materials that are exposed to aggressive environments, such as surgical appliances, light filaments, and watch cases. The high reactivity of zirconium with oxygen at high temperatures is exploited in some specialised applications such as explosive primers and as getters in vacuum tubes.
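As a worked footnote to the Kroll reduction shown in the History section above, the stoichiometry fixes the magnesium consumption; the molar masses used (Zr ≈ 91.22 g/mol, Mg ≈ 24.31 g/mol) are standard values rather than figures from the text:

```latex
\[
\mathrm{ZrCl_4 + 2\,Mg \longrightarrow Zr + 2\,MgCl_2},\qquad
\frac{2 \times 24.31}{91.22} \approx 0.53\ \text{kg of Mg consumed per kg of Zr produced.}
\]
```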
Zirconium powder is used as a degassing agent in electron tubes, while zirconium wire and sheets are utilized for grid and anode supports. Burning zirconium was used as a light source in some photographic flashbulbs. Zirconium powder with a mesh size from 10 to 80 is occasionally used in pyrotechnic compositions to generate sparks. The high reactivity of zirconium leads to bright white sparks. Nuclear applications Cladding for nuclear reactor fuels consumes about 1% of the zirconium supply, mainly in the form of zircaloys. The desired properties of these alloys are a low neutron-capture cross-section and resistance to corrosion under normal service conditions. Efficient methods for removing the hafnium impurities were developed to serve this purpose. One disadvantage of zirconium alloys is their reactivity with water, producing hydrogen and leading to degradation of the fuel rod cladding: Zr + 2 H₂O → ZrO₂ + 2 H₂ Hydrolysis is very slow below 100 °C, but rapid at temperatures above 900 °C. Most metals undergo similar reactions. This redox reaction is relevant to the instability of fuel assemblies at high temperatures, and it occurred in reactors 1, 2 and 3 of the Fukushima I Nuclear Power Plant (Japan) after reactor cooling was interrupted by the earthquake and tsunami disaster of March 11, 2011, leading to the Fukushima I nuclear accidents. After the hydrogen was vented into the maintenance halls of those three reactors, the mixture of hydrogen and atmospheric oxygen exploded, severely damaging the installations and at least one of the containment buildings. Zirconium is a constituent of uranium zirconium hydrides, nuclear fuels used in research reactors. Space and aeronautic industries Materials fabricated from zirconium metal and ZrO₂ are used in space vehicles where resistance to heat is needed. High-temperature parts such as combustors, blades, and vanes in jet engines and stationary gas turbines are increasingly being protected by thin ceramic layers and/or paintable coatings, usually composed of a mixture of zirconia and yttria. Zirconium is also a material of first choice for hydrogen peroxide (H₂O₂) tanks, propellant lines, valves, and thrusters in space propulsion systems, such as those equipping Sierra Space's Dream Chaser spaceplane, where thrust is provided by the combustion of kerosene and hydrogen peroxide, a powerful but unstable oxidizer. The reason is that zirconium has excellent corrosion resistance to H₂O₂ and, above all, does not catalyse its spontaneous self-decomposition, as the ions of many transition metals do. Medical uses Zirconium-bearing compounds are used in many biomedical applications, including dental implants and crowns, knee and hip replacements, middle-ear ossicular chain reconstruction, and other restorative and prosthetic devices. Zirconium binds urea, a property that has been utilized extensively to the benefit of patients with chronic kidney disease. For example, zirconium is a primary component of the sorbent-column-dependent dialysate regeneration and recirculation system known as the REDY system, which was first introduced in 1973. More than 2,000,000 dialysis treatments have been performed using the sorbent column in the REDY system. Although the REDY system was superseded in the 1990s by less expensive alternatives, new sorbent-based dialysis systems are being evaluated and approved by the U.S. Food and Drug Administration (FDA). Renal Solutions developed the DIALISORB technology, a portable, low-water dialysis system.
Also, developmental versions of a Wearable Artificial Kidney have incorporated sorbent-based technologies. Sodium zirconium cyclosilicate is used by mouth in the treatment of hyperkalemia. It is a selective sorbent designed to trap potassium ions in preference to other ions throughout the gastrointestinal tract. Mixtures of monomeric and polymeric Zr⁴⁺ and Al³⁺ complexes with hydroxide, chloride and glycine, called aluminium zirconium glycine salts, are used in a preparation as an antiperspirant in many deodorant products. They have been used since the early 1960s, having been determined to be more efficacious as antiperspirants than contemporary active ingredients such as aluminium chlorohydrate. Defunct applications Zirconium carbonate (3ZrO₂·CO₂·H₂O) was used in lotions to treat poison ivy but was discontinued because it occasionally caused skin reactions. Safety Although zirconium has no known biological role, the human body contains, on average, 250 milligrams of zirconium, and daily intake is approximately 4.15 milligrams (3.5 milligrams from food and 0.65 milligrams from water), depending on dietary habits. Zirconium is widely distributed in nature and is found in all biological systems, for example: 2.86 μg/g in whole wheat, 3.09 μg/g in brown rice, 0.55 μg/g in spinach, 1.23 μg/g in eggs, and 0.86 μg/g in ground beef. Further, zirconium is commonly used in commercial products (e.g. deodorant sticks, aerosol antiperspirants) and also in water purification (e.g. control of phosphorus pollution, bacteria- and pyrogen-contaminated water). Short-term exposure to zirconium powder can cause irritation, but only contact with the eyes requires medical attention. Persistent exposure to zirconium tetrachloride results in increased mortality in rats and guinea pigs and a decrease of blood hemoglobin and red blood cells in dogs. However, in a study of 20 rats given a standard diet containing ~4% zirconium oxide, there were no adverse effects on growth rate, blood and urine parameters, or mortality. The U.S. Occupational Safety and Health Administration (OSHA) legal limit (permissible exposure limit) for zirconium exposure is 5 mg/m³ over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) recommended exposure limit (REL) is 5 mg/m³ over an 8-hour workday, with a short-term limit of 10 mg/m³. At levels of 25 mg/m³, zirconium is immediately dangerous to life and health. However, zirconium is not considered an industrial health hazard. Furthermore, reports of zirconium-related adverse reactions are rare and, in general, rigorous cause-and-effect relationships have not been established. No evidence has been validated that zirconium is carcinogenic or genotoxic. Among the numerous radioactive isotopes of zirconium, ⁹³Zr is among the most common. It is released as a product of nuclear fission of ²³⁵U and ²³⁹Pu, mainly in nuclear power plants and during the nuclear weapons tests of the 1950s and 1960s. It has a very long half-life (1.53 million years), its decay emits only low-energy radiation, and it is not considered particularly hazardous.
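A short sketch quantifying why the long half-life quoted above translates into a low specific activity. The decay-constant relation and Avogadro's number are standard; the resulting ~10⁸ Bq/g figure is a derived illustration, not a value from the text.

```python
import math

# Specific activity of 93Zr from the half-life quoted above (1.53 million years).
T_HALF_YEARS = 1.53e6
SECONDS_PER_YEAR = 3.156e7          # standard approximation
AVOGADRO = 6.022e23
MOLAR_MASS = 93.0                   # g/mol; mass number used as approximation

lam = math.log(2) / (T_HALF_YEARS * SECONDS_PER_YEAR)   # decay constant, 1/s
atoms_per_gram = AVOGADRO / MOLAR_MASS
activity_bq_per_g = lam * atoms_per_gram

print(f"decay constant ≈ {lam:.2e} per second")
print(f"specific activity ≈ {activity_bq_per_g:.2e} Bq/g")  # ≈ 9e7 Bq/g
```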
Physical sciences
Chemical elements_2
null
34439
https://en.wikipedia.org/wiki/Zoonosis
Zoonosis
A zoonosis (plural zoonoses) or zoonotic disease is an infectious disease of humans caused by a pathogen (an infectious agent, such as a bacterium, virus, parasite, or prion) that can jump from a non-human vertebrate to a human. When humans infect non-humans, it is called reverse zoonosis or anthroponosis. Major modern diseases such as Ebola and salmonellosis are zoonoses. HIV was a zoonotic disease transmitted to humans in the early part of the 20th century, though it has now evolved into a separate human-only disease. Human infection with animal influenza viruses is rare, as they do not transmit easily to or among humans. However, avian and swine influenza viruses in particular possess high zoonotic potential, and these occasionally recombine with human strains of the flu and can cause pandemics such as the 2009 swine flu. Zoonoses can be caused by a range of disease pathogens such as emergent viruses, bacteria, fungi and parasites; of 1,415 pathogens known to infect humans, 61% were zoonotic. Most human diseases originated in non-humans; however, only diseases that routinely involve non-human to human transmission, such as rabies, are considered direct zoonoses. Zoonoses have different modes of transmission. In direct zoonosis the disease is directly transmitted from non-humans to humans through media such as air (influenza) or bites and saliva (rabies). In contrast, transmission can also occur via an intermediate species (referred to as a vector), which carries the disease pathogen without getting sick. The term is from Ancient Greek: ζῷον zoon "animal" and νόσος nosos "sickness". Host genetics plays an important role in determining which non-human viruses will be able to make copies of themselves in the human body. Dangerous non-human viruses are those that require few mutations to begin replicating themselves in human cells, since the required combinations of mutations might arise randomly in the natural reservoir. Causes The emergence of zoonotic diseases originated with the domestication of animals. Zoonotic transmission can occur in any context in which there is contact with or consumption of animals, animal products, or animal derivatives. This can occur in a companionistic (pets), economic (farming, trade, butchering, etc.), predatory (hunting, butchering, or consuming wild game), or research context. Recently, new zoonotic diseases have been appearing with increasing frequency. "Approximately 1.67 million undescribed viruses are thought to exist in mammals and birds, up to half of which are estimated to have the potential to spill over into humans", says a study led by researchers at the University of California, Davis. According to a report from the United Nations Environment Programme and International Livestock Research Institute, a large part of the causes are environmental, such as climate change, unsustainable agriculture, exploitation of wildlife, and land-use change. Others are linked to changes in human society, such as an increase in mobility. The organizations propose a set of measures to stop the rise. Contamination of food or water supply Foodborne zoonotic diseases are caused by a variety of pathogens that can affect both humans and animals. The most significant zoonotic pathogens causing foodborne diseases are described below. Bacterial pathogens Escherichia coli O157:H7, Campylobacter, and Salmonella.
Viral pathogens Hepatitis E: Hepatitis E virus (HEV) is primarily transmitted through pork products, especially in developing countries with limited sanitation. The infection can lead to acute liver disease and is particularly dangerous for pregnant women. Norovirus: A member of the family Caliciviridae often found in contaminated shellfish and fresh produce, norovirus is a leading cause of foodborne illness globally. It spreads easily and causes symptoms like vomiting, diarrhea, and stomach pain. Parasitic pathogens Toxoplasma gondii: This parasite is commonly found in undercooked meat, especially pork and lamb, and can cause toxoplasmosis. While typically mild, toxoplasmosis can be severe in immunocompromised individuals and pregnant women, potentially leading to complications. Trichinella spp. are transmitted through undercooked pork and wild game, causing trichinellosis. Symptoms range from mild gastrointestinal distress to severe muscle pain and, in rare cases, can be fatal. Farming, ranching and animal husbandry Farmers and others who come into contact with infected farm animals can contract disease. Glanders primarily affects those who work closely with horses and donkeys. Close contact with cattle can lead to cutaneous anthrax infection, whereas inhalation anthrax infection is more common for workers in slaughterhouses, tanneries, and wool mills. Close contact with sheep that have recently given birth can lead to infection with the bacterium Chlamydia psittaci, causing chlamydiosis (and enzootic abortion in pregnant women), as well as an increased risk of Q fever, toxoplasmosis, and listeriosis in the pregnant or otherwise immunocompromised. Echinococcosis is caused by a tapeworm, which can spread from infected sheep by food or water contaminated by feces or wool. Avian influenza is common in chickens and, while it is rare in humans, the main public health worry is that a strain of avian influenza will recombine with a human influenza virus and cause a pandemic like the 1918 Spanish flu. In 2017, free-range chickens in the UK were temporarily ordered to remain inside due to the threat of avian influenza. Cattle are an important reservoir of cryptosporidiosis, which mainly affects the immunocompromised. Reports have shown that mink can also become infected. In Western countries, the hepatitis E burden is largely dependent on exposure to animal products, and pork is a significant source of infection in this respect. Similarly, the human coronavirus OC43, a major cause of the common cold, can use the pig as a zoonotic reservoir, constantly reinfecting the human population. Veterinarians are exposed to unique occupational hazards when it comes to zoonotic disease. In the US, studies have highlighted an increased risk of injuries and a lack of veterinary awareness of these hazards. Research has demonstrated the importance of continued clinical education for veterinarians on the occupational risks associated with musculoskeletal injuries, animal bites, needle-sticks, and cuts. A July 2020 report by the United Nations Environment Programme stated that the increase in zoonotic pandemics is directly attributable to anthropogenic destruction of nature and the increased global demand for meat, and that the industrial farming of pigs and chickens in particular will be a primary risk factor for the spillover of zoonotic diseases in the future. Habitat loss of viral reservoir species has been identified as a significant source in at least one spillover event.
Wildlife trade or animal attacks The wildlife trade may increase spillover risk because it directly increases the number of interactions across animal species, sometimes in small spaces. The origin of the COVID-19 pandemic has been traced to wet markets in China. Zoonotic disease emergence is demonstrably linked to the consumption of wildlife meat, exacerbated by human encroachment into natural habitats and amplified by the unsanitary conditions of wildlife markets. These markets, where diverse species converge, facilitate the mixing and transmission of pathogens, including those responsible for outbreaks of HIV-1, Ebola, and mpox, and potentially even the COVID-19 pandemic. Notably, small mammals often harbor a vast array of zoonotic bacteria and viruses, yet endemic bacterial transmission among wildlife remains largely unexplored. Therefore, accurately determining the pathogenic landscape of traded wildlife is crucial for guiding effective measures to combat zoonotic diseases and documenting the societal and environmental costs associated with this practice. Rabies Insect vectors African sleeping sickness Dirofilariasis Eastern equine encephalitis Japanese encephalitis Saint Louis encephalitis Scrub typhus Tularemia Venezuelan equine encephalitis West Nile fever Western equine encephalitis Zika fever Pets Pets can transmit a number of diseases. Dogs and cats are routinely vaccinated against rabies. Pets can also transmit ringworm and Giardia, which are endemic in both animal and human populations. Toxoplasmosis is a common infection of cats; in humans it is a mild disease although it can be dangerous to pregnant women. Dirofilariasis is caused by Dirofilaria immitis through mosquitoes infected by mammals like dogs and cats. Cat-scratch disease is caused by Bartonella henselae and Bartonella quintana, which are transmitted by fleas that are endemic to cats. Toxocariasis is the infection of humans by any of several species of roundworm, including species specific to dogs (Toxocara canis) or cats (Toxocara cati). Cryptosporidiosis can be spread to humans from pet lizards, such as the leopard gecko. Encephalitozoon cuniculi is a microsporidial parasite carried by many mammals, including rabbits, and is an important opportunistic pathogen in people immunocompromised by HIV/AIDS, organ transplantation, or CD4+ T-lymphocyte deficiency. Pets may also serve as a reservoir of viral disease and contribute to the chronic presence of certain viral diseases in the human population. For instance, approximately 20% of domestic dogs, cats, and horses carry anti-hepatitis E virus antibodies, and thus these animals probably contribute to the human hepatitis E burden as well. For non-vulnerable populations (e.g., people who are not immunocompromised), the associated disease burden is, however, small. Furthermore, the trade in non-domestic animals, such as wild animals kept as pets, can also increase the risk of zoonosis spread. Exhibition Outbreaks of zoonoses have been traced to human interaction with, and exposure to, other animals at fairs, live animal markets, petting zoos, and other settings. In 2005, the Centers for Disease Control and Prevention (CDC) issued an updated list of recommendations for preventing zoonosis transmission in public settings. The recommendations, developed in conjunction with the National Association of State Public Health Veterinarians, include educational responsibilities of venue operators, limiting public animal contact, and animal care and management.
Hunting and bushmeat Hunting involves humans tracking, chasing, and capturing wild animals, primarily for food or materials such as fur, though pest control and the management of wildlife populations are also motives. Transmission of zoonotic diseases, those jumping from animals to humans, can occur through various routes: direct physical contact, airborne droplets or particles, bites or vector transport by insects, oral ingestion, or contact with contaminated environments. Wildlife activities like hunting and trade bring humans closer to dangerous zoonotic pathogens, threatening global health. According to the Centers for Disease Control and Prevention (CDC), hunting and consuming wild animal meat ("bushmeat") in regions like Africa can expose people to infectious diseases due to the types of animals involved, such as bats and primates. Common preservation methods such as smoking or drying are not sufficient to eliminate these risks. Although bushmeat provides protein and income for many, the practice is intricately linked to numerous emerging infectious diseases like Ebola, HIV, and SARS, raising critical public health concerns. A review published in 2022 found evidence that zoonotic spillover linked to wild-meat consumption has been reported across all continents. Deforestation, biodiversity loss and environmental degradation Kate Jones, Chair of Ecology and Biodiversity at University College London, says zoonotic diseases are increasingly linked to environmental change and human behavior. The disruption of pristine forests driven by logging, mining, road building through remote places, rapid urbanization, and population growth is bringing people into closer contact with animal species they may never have been near before. The resulting transmission of disease from wildlife to humans, she says, is now "a hidden cost of human economic development". In a guest article published by IPBES, President of the EcoHealth Alliance and zoologist Peter Daszak, along with three co-chairs of the 2019 Global Assessment Report on Biodiversity and Ecosystem Services, Josef Settele, Sandra Díaz, and Eduardo Brondizio, wrote that "rampant deforestation, uncontrolled expansion of agriculture, intensive farming, mining and infrastructure development, as well as the exploitation of wild species have created a 'perfect storm' for the spillover of diseases from wildlife to people." Joshua Moon, Clare Wenham, and Sophie Harman said that there is evidence that decreased biodiversity has an effect on the diversity of hosts and the frequency of human–animal interactions with potential for pathogenic spillover. An April 2020 study, published in Proceedings of the Royal Society B, found that increased virus spillover events from animals to humans can be linked to biodiversity loss and environmental degradation: as humans further encroach on wildlands to engage in agriculture, hunting, and resource extraction, they become exposed to pathogens that would normally remain in those areas. Such spillover events have been tripling every decade since 1980. An August 2020 study, published in Nature, concludes that the anthropogenic destruction of ecosystems for the purpose of expanding agriculture and human settlements reduces biodiversity and allows smaller animals such as bats and rats, which are more adaptable to human pressures and also carry the most zoonotic diseases, to proliferate. This in turn can result in more pandemics.
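To put the "tripling every decade" figure above in arithmetic terms (the 1980–2020 horizon is chosen here purely for illustration and is not a claim from the text):

```latex
\[
\text{relative spillover frequency after } n \text{ decades} \approx 3^{\,n},
\qquad 3^{4} = 81 \quad\text{(1980 to 2020)}.
\]
```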
In October 2020, the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services published its report on the 'era of pandemics', prepared by 22 experts in a variety of fields, and concluded that anthropogenic destruction of biodiversity is paving the way to the pandemic era and could result in as many as 850,000 viruses being transmitted from animals – in particular birds and mammals – to humans. The increased pressure on ecosystems is being driven by the "exponential rise" in consumption and trade of commodities such as meat, palm oil, and metals, largely facilitated by developed nations, and by a growing human population. According to Peter Daszak, the chair of the group who produced the report, "there is no great mystery about the cause of the Covid-19 pandemic, or of any modern pandemic. The same human activities that drive climate change and biodiversity loss also drive pandemic risk through their impacts on our environment." Climate change According to a report from the United Nations Environment Programme and International Livestock Research Institute, entitled "Preventing the next pandemic – Zoonotic diseases and how to break the chain of transmission", climate change is one of the seven human-related causes of the increase in the number of zoonotic diseases. In March 2021, the University of Sydney published a study examining factors that increase the likelihood of epidemics and pandemics like the COVID-19 pandemic. The researchers found that "pressure on ecosystems, climate change and economic development are key factors" in doing so. More zoonotic diseases were found in high-income countries. A 2022 study dedicated to the link between climate change and zoonosis found a strong link between climate change and epidemic emergence over the last 15 years, as climate change has caused a massive migration of species to new areas, and consequently contact between species which do not normally come into contact with one another. Even in a scenario with weak climatic changes, there will be some 15,000 spillovers of viruses to new hosts in the coming decades. The areas with the greatest potential for spillover are the mountainous tropical regions of Africa and southeast Asia. Southeast Asia is especially vulnerable, as it has a large number of bat species that generally do not mix but could easily do so if climate change forced them to begin migrating. A 2021 study found possible links between climate change and transmission of COVID-19 through bats. The authors suggest that climate-driven changes in the distribution and robustness of bat species harboring coronaviruses may have occurred in eastern Asian hotspots (southern China, Myanmar, and Laos), constituting a driver behind the evolution and spread of the virus. Secondary transmission Zoonotic diseases place a significant burden on public health systems, as vulnerable groups such as the elderly, children, childbearing women, and immunocompromised individuals are at risk. According to the World Health Organization (WHO), any disease or infection that is primarily 'naturally' transmissible from vertebrate animals to humans, or from humans to animals, is classified as a zoonosis. Factors such as climate change, urbanization, animal migration and trade, travel and tourism, vector biology, anthropogenic factors, and natural factors have greatly influenced the emergence, re-emergence, distribution, and patterns of zoonoses.
Zoonotic diseases generally refer to diseases of animal origin in which direct or vector-mediated animal-to-human transmission is the usual source of human infection. Animal populations are the principal reservoir of the pathogen, and horizontal infection in humans is rare. A few examples in this category include lyssavirus infections, Lyme borreliosis, plague, tularemia, leptospirosis, ehrlichiosis, Nipah virus, West Nile virus (WNV) and hantavirus infections. Secondary transmission encompasses a category of diseases of animal origin in which the actual transmission to humans is a rare event but, once it has occurred, human-to-human transmission maintains the infection cycle for some period of time. Examples include human immunodeficiency virus (HIV)/acquired immune deficiency syndrome (AIDS), certain influenza A strains, Ebola virus and severe acute respiratory syndrome (SARS). Ebola, for instance, is spread by direct transmission to humans from handling bushmeat (wild animals hunted for food) and contact with infected bats, or through close contact with other infected animals, including chimpanzees, fruit bats, and forest antelope. Secondary transmission then occurs from human to human by direct contact with the blood, bodily fluids, or skin of patients with, or who have died of, Ebola virus disease. Recent infections with these emerging and re-emerging zoonotic pathogens have occurred as a result of many ecological and sociological changes globally. History During most of human prehistory, groups of hunter-gatherers were probably very small. Such groups probably made contact with other bands only rarely. Such isolation would have caused epidemic diseases to be restricted to any given local population, because the propagation and expansion of epidemics depend on frequent contact with other individuals who have not yet developed an adequate immune response. To persist in such a population, a pathogen either had to be a chronic infection, staying present and potentially infectious in the infected host for long periods, or it had to have other additional species as a reservoir in which it could maintain itself until further susceptible hosts were contacted and infected. In fact, for many "human" diseases, the human is actually better viewed as an accidental or incidental victim and a dead-end host. Examples include rabies, anthrax, tularemia, and West Nile fever. Thus, much of human exposure to infectious disease has been zoonotic. Many diseases, even epidemic ones, have zoonotic origins; measles, smallpox, influenza, HIV, and diphtheria are particular examples. Various forms of the common cold and tuberculosis are also adaptations of strains originating in other species. Some experts have suggested that all human viral infections were originally zoonotic. Zoonoses are of interest because they are often previously unrecognized diseases or have increased virulence in populations lacking immunity. The West Nile virus first appeared in the United States in 1999, in the New York City area. Bubonic plague is a zoonotic disease, as are salmonellosis, Rocky Mountain spotted fever, and Lyme disease. A major factor contributing to the appearance of new zoonotic pathogens in human populations is increased contact between humans and wildlife.
This can be caused either by encroachment of human activity into wilderness areas or by movement of wild animals into areas of human activity. An example of this is the outbreak of Nipah virus in peninsular Malaysia in 1999, when intensive pig farming began within the habitat of infected fruit bats. The undetected infection of these pigs amplified the force of infection, transmitting the virus to farmers and eventually causing 105 human deaths. Similarly, in recent times avian influenza and West Nile virus have spilled over into human populations, probably due to interactions between the carrier host and domestic animals. Highly mobile animals, such as bats and birds, may present a greater risk of zoonotic transmission than other animals due to the ease with which they can move into areas of human habitation. Because they depend on the human host for part of their life cycle, diseases such as African schistosomiasis, river blindness, and elephantiasis are not defined as zoonotic, even though they may depend on transmission by insects or other vectors. Use in vaccines The first vaccine against smallpox, developed by Edward Jenner in 1796, used infection with a zoonotic bovine virus which caused a disease called cowpox. Jenner had noticed that milkmaids were resistant to smallpox. Milkmaids contracted a milder version of the disease from infected cows, which conferred cross-immunity to the human disease. Jenner extracted an infectious preparation of 'cowpox' and subsequently used it to inoculate persons against smallpox. As a result of vaccination, smallpox has been eradicated globally, and mass inoculation against this disease ceased in 1981. There are a variety of vaccine types, including traditional inactivated pathogen vaccines, subunit vaccines, and live attenuated vaccines. There are also new vaccine technologies, such as viral vector vaccines and DNA/RNA vaccines, which include many of the COVID-19 vaccines. Lists of diseases
Biology and health sciences
Concepts
Health
34440
https://en.wikipedia.org/wiki/Zeppelin
Zeppelin
A Zeppelin is a type of rigid airship named after the German inventor Ferdinand von Zeppelin, who pioneered rigid airship development at the beginning of the 20th century. Zeppelin's notions were first formulated in 1874 and developed in detail in 1893. They were patented in Germany in 1895 and in the United States in 1899. After the outstanding success of the Zeppelin design, the word zeppelin came to be commonly used to refer to all forms of rigid airships. Zeppelins were first flown commercially in 1910 by Deutsche Luftschiffahrts-AG (DELAG), the world's first airline in revenue service. By mid-1914, DELAG had carried over 10,000 fare-paying passengers on over 1,500 flights. During World War I, the German military made extensive use of Zeppelins as bombers and as scouts. Numerous bombing raids on Britain resulted in over 500 deaths. The defeat of Germany in 1918 temporarily slowed the airship business. Although DELAG established a scheduled daily service between Berlin, Munich, and Friedrichshafen in 1919, the airships built for that service eventually had to be surrendered under the terms of the Treaty of Versailles, which also prohibited Germany from building large airships. An exception was made to allow the construction of one airship for the United States Navy, the order for which saved the company from extinction. In 1926, the restrictions on airship construction were lifted and, with the aid of donations from the public, work began on the construction of LZ 127 Graf Zeppelin. That revived the company's fortunes and, during the 1930s, the airships Graf Zeppelin and the even larger LZ 129 Hindenburg operated regular transatlantic flights from Germany to North America and Brazil. The spire of the Empire State Building was originally designed to serve as a mooring mast for Zeppelins and other airships, although it was found that high winds made that impossible and the plan was abandoned. The Hindenburg disaster in 1937, along with political and economic developments in Germany in the lead-up to World War II, hastened the demise of airships. Principal characteristics The principal feature of the Zeppelin's design was a fabric-covered, rigid metal framework of transverse rings and longitudinal girders enclosing a number of individual gasbags. This allowed the craft to be much larger than non-rigid airships, which relied on the inflation of a single pressure envelope to maintain their shape. The framework of most Zeppelins was made of duralumin, a combination of aluminium, copper, and two or three other metals, the exact composition of which was kept secret for years. Early Zeppelins used rubberized cotton for the gasbags, but most later craft used goldbeater's skin made from cattle gut. The first Zeppelins had long cylindrical hulls with tapered ends and complex multi-plane fins. During World War I, following the lead of the rival firm Schütte-Lanz Luftschiffbau, almost all later airships changed to the more familiar streamlined shape with cruciform tail fins. Zeppelins were propelled by several internal combustion engines, mounted in gondolas or engine cars attached outside the structural framework. Some of these could provide reverse thrust for manoeuvring while mooring. Early models had a fairly small externally mounted gondola for passengers and crew beneath the frame.
This space was never heated, because fire outside of the kitchen was considered too risky, and during trips across the North Atlantic or Siberia passengers were forced to bundle in blankets and furs to keep warm and were often miserably cold. By the time of the Hindenburg, several important changes had made travelling much more comfortable: the passenger space had been relocated to the interior of the framework, the passenger rooms were insulated from the exterior by the dining area, and forced warm air, heated by the water that cooled the forward engines, could be circulated. The new design did prevent passengers from enjoying the views from the windows of their berths, which had been a major attraction on the Graf Zeppelin. On both the older and newer vessels, the external viewing windows were often open during flight. The flight altitude was so low that no pressurization of the cabins was necessary. The Hindenburg did maintain a pressurized, air-locked smoking room: no flame was allowed, but a single electric lighter was provided, which could not be removed from the room. Access to Zeppelins was achieved in a number of ways. The Graf Zeppelin's gondola was accessed while the vessel was on the ground, via gangways. The Hindenburg also had passenger gangways leading from the ground directly into its hull, which could be withdrawn entirely; ground access to the gondola; and an exterior access hatch via its electrical room, the last of these intended for crew use only. On some long-distance zeppelins, engines were powered by a special Blau gas produced by the Zeppelin facility in Friedrichshafen. The combustible Blau gas was formulated so that its density was close to that of air, so that its storage and consumption had little effect on the zeppelin's buoyancy. Blau gas was used on the first zeppelin voyage to America, starting in 1929. History Early designs Count Ferdinand von Zeppelin's interest in airship development began in 1874, when he was inspired by a lecture given by Heinrich von Stephan on the subject of "World Postal Services and Air Travel" to outline the basic principle of his later craft in a diary entry dated 25 March 1874. It describes a large, rigidly framed outer envelope containing several separate gasbags. He had previously encountered Union Army balloons in 1863, when he visited the United States as a military observer during the American Civil War. Count Zeppelin began to seriously pursue his project after his early retirement from the army in 1890 at the age of 52. Convinced of the potential importance of aviation, he started working on various designs in 1891, and had completed detailed designs by 1893. An official committee reviewed his plans in 1894, and he received a patent, granted on 31 August 1895, with Theodor Kober producing the technical drawings. Zeppelin's patent described a "steerable aircraft with several carrier bodies arranged one behind another", an airship consisting of flexibly articulated rigid sections. The front section, containing the crew and engines, was long with a gas capacity of . The middle section was long with an intended useful load of and the rear section long with an intended load of . Count Zeppelin's attempts to secure government funding for his project proved unsuccessful, but a lecture given to the Union of German Engineers gained their support. Zeppelin also sought support from the industrialist Carl Berg, then engaged in construction work on the second airship design of David Schwarz.
Berg was under contract not to supply aluminium to any other airship manufacturer, and subsequently made a payment to Schwarz's widow as compensation for breaking this agreement. Schwarz's design differed fundamentally from Zeppelin's, crucially lacking the use of separate gasbags inside a rigid envelope. In 1898, Count Zeppelin founded the Gesellschaft zur Förderung der Luftschiffahrt (Society for the Promotion of Airship Flight), contributing more than half of its 800,000 mark share-capital himself. Responsibility for the detail design was given to Kober, whose place was later taken by Ludwig Dürr, and construction of the first airship began in 1899 in a floating assembly-hall or hangar in the Bay of Manzell near Friedrichshafen on Lake Constance (the Bodensee). The intention behind the floating hall was to facilitate the difficult task of bringing the airship out of the hall, as it could easily be aligned with the wind. The LZ 1 (LZ for Luftschiff Zeppelin, or "Zeppelin Airship") was long with a hydrogen capacity of , was powered by two Daimler engines, each driving a pair of propellers mounted either side of the envelope via bevel gears and a driveshaft, and was controlled in pitch by moving a weight between its two nacelles. The first flight took place over Lake Constance on 2 July 1900. Damaged during landing, it was repaired and modified and proved its potential in two subsequent flights made on 17 and 24 October 1900, bettering the 6 m/s velocity attained by the French airship La France. Despite this performance, the shareholders declined to invest more money, and so the company was liquidated, with Count von Zeppelin purchasing the ship and equipment. The Count wished to continue experimenting, but he eventually dismantled the ship in 1901. Donations, the profits of a special lottery, some public funding, a mortgage of Count von Zeppelin's wife's estate, and a 100,000 mark contribution by Count von Zeppelin himself allowed the construction of LZ 2, which made only a single flight on 17 January 1906. After both engines failed it made a forced landing in the Allgäu mountains, where a storm subsequently damaged the anchored ship beyond repair. Incorporating all the usable parts of LZ 2, its successor LZ 3 became the first truly successful Zeppelin. This renewed the interest of the German military, but a condition of purchase of an airship was a 24-hour endurance trial. This was beyond the capabilities of LZ 3, leading Zeppelin to construct his fourth design, the LZ 4, first flown on 20 June 1908. On 1 July it was flown over Switzerland to Zürich and then back to Lake Constance, covering and reaching an altitude of . An attempt to complete the 24-hour trial flight ended when LZ 4 had to make a landing at Echterdingen near Stuttgart because of mechanical problems. During the stop, a storm tore the airship away from its moorings on the afternoon of 5 August 1908. It crashed into a tree, caught fire, and quickly burnt out. No one was seriously injured. This accident would have finished Zeppelin's experiments, but his flights had generated huge public interest and a sense of national pride regarding his work, and spontaneous donations from the public began pouring in, eventually totalling over six million marks. This enabled the Count to found the Luftschiffbau Zeppelin GmbH (Airship Construction Zeppelin Ltd.) and the Zeppelin Foundation. Before World War I Before World War I (1914–1918) the Zeppelin company manufactured 21 more airships.
The Imperial German Army bought LZ 3 and LZ 5 (a sister-ship to LZ 4 which was completed in May 1909) and designated them Z I and Z II respectively. Z II was wrecked in a gale in April 1910, while Z I flew until 1913, when it was decommissioned and replaced by LZ 15, designated Ersatz Z I. First flown on 16 January 1913, it was wrecked on 19 March of the same year. In April 1913 its newly built sister-ship LZ 16 (Z IV) accidentally intruded into French airspace owing to a navigational error caused by high winds and poor visibility. The commander judged it proper to land the airship to demonstrate that the incursion was accidental, and brought the ship down on the military parade-ground at Lunéville. The airship remained on the ground until the following day, permitting a detailed examination by French airship experts. In 1909, Count Zeppelin founded the world's first airline, the Deutsche Luftschiffahrts-Aktiengesellschaft (German Airship Travel Corporation), generally known as DELAG, to promote his airships, initially using LZ 6, which he had hoped to sell to the German Army. Notable aviation figures like Orville Wright offered critical perspectives on the Zeppelin; in a September 1909 New York Times interview, Wright compared airships to steam engines nearing their developmental peak, while seeing airplanes as akin to gas engines with untapped potential for innovation. The airships did not provide a scheduled service between cities, but generally operated pleasure cruises, carrying twenty passengers. The airships were given names in addition to their production numbers. LZ 6 first flew on 25 August 1909 and was accidentally destroyed in Baden-Oos on 14 September 1910 by a fire in its hangar. The second DELAG airship, LZ 7 Deutschland, made its maiden voyage on 19 June 1910. On 28 June it set off on a voyage to publicise Zeppelins, carrying 19 journalists as passengers. A combination of adverse weather and engine failure brought it down at Mount Limberg near Bad Iburg in Lower Saxony, its hull getting stuck in trees. All passengers and crew were unhurt, except for one crew member who broke his leg when he jumped from the craft. It was replaced by LZ 8 Deutschland II, which also had a short career, first flying on 30 March 1911 and becoming damaged beyond repair when caught by a strong cross-wind while being walked out of its shed on 16 May. The company's fortunes changed with the next ship, LZ 10 Schwaben, which first flew on 26 June 1911 and carried 1,553 passengers in 218 flights before catching fire after a gust tore it from its mooring near Düsseldorf. Other DELAG ships included LZ 11 Viktoria Luise (1912), LZ 13 Hansa (1912) and LZ 17 Sachsen (1913). By the outbreak of World War I in August 1914, 1,588 flights had carried 10,197 fare-paying passengers. On 24 April 1912, the Imperial German Navy ordered its first Zeppelin, an enlarged version of the airships operated by DELAG, which received the naval designation L 1 and entered Navy service in October 1912. On 18 January 1913 Admiral Alfred von Tirpitz, Secretary of State of the German Imperial Naval Office, obtained the agreement of Kaiser Wilhelm II to a five-year program of expansion of German naval-airship strength, involving the building of two airship bases and constructing a fleet of ten airships. The first airship of the program, L 2, was ordered on 30 January. L 1 was lost on 9 September near Heligoland when caught in a storm while taking part in an exercise with the German fleet.
Fourteen crew members drowned, the first fatalities in a Zeppelin accident. Less than six weeks later, on 17 October, LZ 18 (L 2) caught fire during its acceptance trials, killing the entire crew. These accidents deprived the Navy of most of its experienced personnel: the head of the Admiralty Air Department was killed in the L 1 and his successor died in the L 2. The Navy was left with three partially trained crews. The next Navy zeppelin, the M-class L 3, did not enter service until May 1914: in the meantime, Sachsen was hired from DELAG as a training ship. By the outbreak of war in August 1914, Zeppelin had started constructing the first M-class airships, which had a length of , with a volume of and a useful load of . Their three Maybach C-X engines produced each, and they could reach speeds of up to . During World War I During World War I, Germany's airships were operated separately by the Army and the Navy. At the war's outset, the Army assumed control of the three remaining DELAG airships, having already decommissioned three older Zeppelins, including Z I. Throughout the war, the Navy primarily used its Zeppelins for reconnaissance missions. Although Zeppelin bombing raids, especially those aimed at London, captivated the German public's imagination, they had limited material success. Nevertheless, these raids, along with later bombing raids by airplanes, led to the diversion of men and resources from the Western Front. Additionally, the fear of these attacks impacted industrial production to some extent. Early offensive operations by Army airships quickly revealed their extreme vulnerability to ground fire when flown at low altitudes, leading to the loss of several airships. Since no dedicated bombs had been developed at the time, these early raids dropped artillery shells instead. On 5 August 1914, the airship Z VI bombed Liège. Due to cloud cover, it had to fly at a relatively low altitude, making it susceptible to small-arms fire; the damage led to a forced landing near Bonn, where the airship was destroyed. Thirteen bombs were dropped during the raid, resulting in the deaths of nine civilians. On 21 August, airships Z VII and Z VIII were damaged by ground fire while supporting German Army operations in Alsace, with Z VIII ultimately lost. During the night of 24–25 August, Z IX bombed Antwerp, striking near the royal palace and killing five people. Less effective raids followed on the nights of 1–2 September and 7 October. However, on 8 October, Z IX was destroyed in its hangar at Düsseldorf by Flight Lieutenant Reginald Marix of the Royal Naval Air Service (RNAS). The RNAS had previously bombed Zeppelin bases in Cologne on 22 September 1914. On the Eastern Front, airship Z V was brought down by ground fire on 28 August during the Battle of Tannenberg, with most of its crew captured. Z IV bombed Warsaw on 24 September and was also used to support German Army operations in East Prussia. By the end of 1914, the Army's airship fleet had been reduced to four. On 20 March 1915, temporarily forbidden from bombing London by the Kaiser, Z X (LZ 29), LZ 35 and the Schütte-Lanz airship SL 2 set off to bomb Paris: SL 2 was damaged by artillery fire while crossing the front and turned back but the two Zeppelins reached Paris and dropped of bombs, killing only one and wounding eight. On the return journey Z X was damaged by anti-aircraft fire and was damaged beyond repair in the resulting forced landing. Three weeks later LZ 35 suffered a similar fate after bombing Poperinghe.
Paris mounted a more effective defence against zeppelin raids than London. Zeppelins attacking Paris first had to fly over the system of forts between the front and the city, from which they were subjected to anti-aircraft fire with reduced risk of collateral damage. The French also maintained a continuous patrol of two fighters over Paris at an altitude from which they could promptly attack arriving zeppelins, avoiding the delay otherwise needed to climb to the raiders' altitude. Two further missions were flown against Paris in January 1916: on 29 January LZ 79 killed 23 and injured another 30 but was so severely damaged by anti-aircraft fire that it crashed during the return journey. A second mission by LZ 77 the following night bombed the suburbs of Asnières and Versailles, with little effect. Airship operations in the Balkans started in the autumn of 1915, and an airship base was constructed at Szentandras. In November 1915, LZ 81 was used to fly diplomats to Sofia for negotiations with the Bulgarian government. This base was also used by LZ 85 to conduct two raids on Salonika in early 1916: a third raid on 4 May ended with it being brought down by anti-aircraft fire. The crew survived but were taken prisoner. When Romania entered the war in August 1916, LZ 101 was transferred to Yambol and bombed Bucharest on 28 August, 4 September and 25 September. LZ 86 transferred to Szentandras and made a single attack on the Ploiești oil fields on 4 September but was wrecked on attempting to land after the mission. Its replacement was hit by anti-aircraft fire during its second attack on Bucharest on 26 September and was damaged beyond repair in the resulting forced landing; it was in turn replaced by LZ 97. At the instigation of the Kaiser a plan was made to bomb Saint Petersburg in December 1916. Two Navy zeppelins were transferred to Wainoden on the Courland Peninsula. A preliminary attempt to bomb Reval on 28 December ended in failure caused by operating problems due to the extreme cold, and one of the airships was destroyed in a forced landing at Seerappen. The plan was subsequently abandoned. In 1917 the German High Command made an attempt to use a Zeppelin to deliver supplies to Lettow-Vorbeck's forces in German East Africa. L 57, a specially lengthened craft, was to have flown the mission but was destroyed shortly after completion. A Zeppelin then under construction, L 59, was then modified for the mission: it set off from Yambol on 21 November 1917 and nearly reached its destination, but was ordered to return by radio. Its journey covered and lasted 95 hours. It was then used for reconnaissance and bombing missions in the eastern Mediterranean. It flew one bombing mission against Naples on 10–11 March 1918. A planned attack on Suez was turned back by high winds, and on 7 April 1918 it was on a mission to bomb the British naval base at Malta when it caught fire over the Straits of Otranto, with the loss of all its crew. On 5 January 1918, a fire at Ahlhorn destroyed four of the specialised double sheds along with four Zeppelins and one Schütte-Lanz. In July 1918 the Tondern raid, conducted by the RAF and Royal Navy, destroyed two Zeppelins in their sheds. 1914–1918 naval patrols The main use of the airship was in reconnaissance over the North Sea and the Baltic, and the majority of airships manufactured were used by the Navy. Patrolling had priority over any other airship activity.
During the war almost 1,000 missions were flown over the North Sea alone, compared with about 50 strategic bombing raids. The German Navy had some 15 Zeppelins in commission by the end of 1915 and was able to have two or more patrolling continuously at any one time. However, their operations were limited by weather conditions. On 16 February 1915, L 3 and L 4 were lost owing to a combination of engine failure and high winds, L 3 crashing on the Danish island of Fanø without loss of life and L 4 coming down at Blaavands Huk; eleven crew escaped from the forward gondola, after which the lightened airship, with four crew members remaining in the aft engine car, was blown out to sea and lost. At this stage in the war there was no clear doctrine for the use of Naval airships. A single large Zeppelin, L 5, played an unimportant part in the Battle of the Dogger Bank on 24 January 1915. L 5 was carrying out a routine patrol when it picked up Admiral Hipper's radio signal announcing that he was engaged with the British battle cruiser squadron. Heading towards the German fleet's position, the Zeppelin was forced to climb above the cloud cover by fire from the British fleet: its commander then decided that it was his duty to cover the retreating German fleet rather than observe British movements. In 1915, patrols were only carried out on 124 days, and in other years the total was considerably less. They prevented British ships from approaching Germany, spotted when and where the British were laying mines and later aided in the destruction of those mines. Zeppelins would sometimes land on the sea next to a minesweeper, bring aboard an officer and show him the mines' locations. In 1917, the Royal Navy began to take effective countermeasures against airship patrols over the North Sea. In April, the first Curtiss H.12 Large America long-range flying boats were delivered to RNAS Felixstowe, and in July 1917 an aircraft carrier entered service and launching platforms for aeroplanes were fitted to the forward turrets of some light cruisers. On 14 May, L 22 was shot down near Terschelling Bank by an H.12 flown by Lt. Galpin and Sub-Lt. Leckie, which had been alerted following interception of its radio traffic. Two abortive interceptions were made by Galpin and Leckie on 24 May and 5 June. On 14 June, L 43 was brought down by an H.12 flown by Sub Lts. Hobbs and Dickie. On the same day Galpin and Leckie intercepted and attacked L 46. The Germans had believed that the previous unsuccessful attacks had been made by an aircraft operating from one of the Royal Navy's seaplane carriers; now realising that there was a new threat, Strasser ordered airships patrolling in the Terschelling area to maintain an altitude of at least , considerably reducing their effectiveness. On 21 August, L 23, patrolling off the Danish coast, was spotted by the British 3rd Light Cruiser squadron which was in the area. One of the cruisers launched its Sopwith Pup, and Sub-Lt. B. A. Smart succeeded in shooting the Zeppelin down in flames. The cause of the airship's loss was not discovered by the Germans, who believed the Zeppelin had been brought down by anti-aircraft fire from ships. Bombing campaign against Britain At the beginning of the conflict, the German command had high hopes for the airships, which were considerably more capable than contemporary light fixed-wing machines: they were almost as fast, could carry multiple machine guns, and had enormously greater bomb-load, range and endurance.
Contrary to expectation, it was not easy to ignite the hydrogen using standard bullets and shrapnel. The Allies only started to exploit the Zeppelin's great vulnerability to fire when a combination of Pomeroy and Brock explosive ammunition with Buckingham incendiary ammunition was used in fighter aircraft machine guns during 1916. The British had been concerned over the threat posed by Zeppelins since 1909, and attacked the Zeppelin bases early in the war. LZ 25 was destroyed in its hangar at Düsseldorf on 8 October 1914 by bombs dropped by Flt Lt Reginald Marix, RNAS, and the sheds at Cologne as well as the Zeppelin works in Friedrichshafen were also attacked. These raids were followed by the Cuxhaven Raid on Christmas Day 1914, one of the first operations carried out by ship-launched aeroplanes. Airship raids on Great Britain were approved by the Kaiser on 7 January 1915, although he excluded London as a target and further demanded that no attacks be made on historic buildings. The raids were intended to target only military sites on the east coast and around the Thames estuary, but bombing accuracy was poor owing to the height at which the airships flew, and navigation was problematic. The airships relied largely on dead reckoning, supplemented by a radio direction-finding system of limited accuracy. After blackouts became widespread, many bombs fell at random on uninhabited countryside. 1915 The first raid on England took place on the night of 19–20 January 1915. Two Zeppelins, L 3 and L 4, intended to attack targets near the River Humber but, diverted by strong winds, eventually dropped their bombs on Great Yarmouth, Sheringham, King's Lynn and the surrounding villages, killing four and injuring 16. Material damage was estimated at £7,740. The Kaiser authorised the bombing of the London docks on 12 February 1915, but no raids on London took place until May. Two Navy raids failed due to bad weather on 14 and 15 April, and it was decided to delay further attempts until the more capable P-class Zeppelins were in service. The Army received the first of these, LZ 38, and Erich Linnarz commanded it on a raid over Ipswich on 29–30 April and another, attacking Southend, on 9–10 May. LZ 38 also attacked Dover and Ramsgate on 16–17 May, before returning to bomb Southend on 26–27 May. These four raids killed six people and injured six, causing property damage estimated at £16,898. Twice RNAS aircraft tried to intercept LZ 38, but on both occasions it either outclimbed the attacking aircraft or was already at too great an altitude to be intercepted. On 31 May Linnarz commanded LZ 38 on the first raid against London. In total some 120 bombs were dropped on a line stretching from Stoke Newington south to Stepney and then north toward Leytonstone. Seven people were killed and 35 injured. Forty-one fires were started, burning out seven buildings, and the total damage was assessed at £18,596. Aware of the problems the Germans were experiencing with navigation, the government responded to this raid by issuing a D notice prohibiting the press from reporting anything about raids beyond what was mentioned in official statements. Only one of the 15 defensive sorties managed to make visual contact with the enemy, and one of the pilots, Flt Lieut D. M. Barnes, was killed while attempting to land. The first naval attempt on London took place on 4 June: strong winds caused the commander of L 9 to misjudge his position, and the bombs were dropped on Gravesend.
L 9 was also diverted by the weather on 6–7 June, attacking Hull instead of London and causing considerable damage. On the same night an Army raid of three Zeppelins also failed because of the weather, and as the airships returned to Evere (Brussels) they ran into a counter-raid by RNAS aircraft flying from Furnes, Belgium. LZ 38 was destroyed on the ground and LZ 37 was intercepted in the air by R. A. J. Warneford, who dropped six bombs on the airship, setting it on fire. All but one of the crew died. Warneford was awarded the Victoria Cross for his achievement. As a consequence of the RNAS raid both the Army and Navy withdrew from their bases in Belgium. After an ineffective attack by L 10 on Tyneside on 15–16 June, the short summer nights discouraged further raids for some months, and the remaining Army Zeppelins were reassigned to the Eastern and Balkan fronts. The Navy resumed raids on Britain in August, when three largely ineffective raids were carried out. On 10 August the anti-aircraft guns had their first success, causing L 12 to come down into the sea off Zeebrugge, and on 17–18 August L 10 became the first Navy airship to reach London. Mistaking the reservoirs of the Lea Valley for the Thames, it dropped its bombs on Walthamstow and Leytonstone. L 10 was destroyed a little over two weeks later: it was struck by lightning and caught fire off Cuxhaven, and the entire crew was killed. Three Army airships set off to bomb London on 7–8 September, of which two succeeded: SL 2 dropped bombs between Southwark and Woolwich; LZ 74 scattered 39 bombs over Cheshunt before heading on to London and dropping a single bomb on Fenchurch Street station. The Navy attempted to follow up the Army's success the following night. One Zeppelin targeted the benzole plant at Skinningrove and three set off to bomb London: two were forced to turn back but L 13, commanded by Kapitänleutnant Heinrich Mathy, reached London. The bomb-load included a bomb, the largest yet carried. This exploded near Smithfield Market, destroying several houses and killing two men. More bombs fell on the textile warehouses north of St Paul's Cathedral, starting a fire which, despite the attendance of 22 fire engines, caused over half a million pounds of damage: Mathy then turned east, dropping his remaining bombs on Liverpool Street station. The Zeppelin was the target of concentrated anti-aircraft fire, but no hits were scored and the falling shrapnel caused both damage and alarm on the ground. The raid killed 22 people and injured 87. The monetary cost of the damage was over one sixth of the total damage costs inflicted by bombing raids during the war. After three more raids were scattered by the weather, a five-Zeppelin raid was launched by the Navy on 13 October, the "Theatreland Raid". Arriving over the Norfolk coast at around 18:30, the Zeppelins encountered new ground defences installed since the September raid; these had no success, although the airship commanders commented on the improved defences of the city. L 15 began bombing over Charing Cross, the first bombs striking the Lyceum Theatre and the corner of Exeter and Wellington Streets, killing 17 and injuring 20. None of the other Zeppelins reached central London: bombs fell on Woolwich, Guildford, Tonbridge, Croydon, Hertford and an army camp near Folkestone. A total of 71 people were killed and 128 injured. This was the last raid of 1915, as bad weather coincided with the new moon in both November and December 1915 and continued into January 1916.
Although these raids had no significant military impact, the psychological effect was considerable. The writer D. H. Lawrence described one raid in a letter to Lady Ottoline Morrell. 1916 The raids continued in 1916. In December 1915, additional P-class Zeppelins and the first of the new Q-class airships were delivered. The Q class was an enlargement of the P class, with improved ceiling and bomb-load. The Army took full control of ground defences in February 1916, and a variety of guns of less than 4-inch (102 mm) calibre were converted to anti-aircraft use. Searchlights were introduced, initially manned by police. By mid-1916, there were 271 anti-aircraft guns and 258 searchlights across England. Aerial defences against Zeppelins were divided between the RNAS and the Royal Flying Corps (RFC), with the Navy engaging enemy airships approaching the coast while the RFC took responsibility once the enemy had crossed the coastline. Initially the War Office had believed that the Zeppelins used a layer of inert gas to protect themselves from incendiary bullets, and favoured the use of bombs or devices like the Ranken dart. However, by mid-1916 an effective mixture of explosive, tracer and incendiary rounds had been developed. There were 23 airship raids in 1916, in which 125 tons of bombs were dropped, killing 293 people and injuring 691. The first raid of 1916 was carried out by the German Navy. Nine Zeppelins were sent to Liverpool on the night of 31 January – 1 February. A combination of poor weather and mechanical problems scattered them across the Midlands, and several towns were bombed. A total of 61 people were reported killed and 101 injured by the raid. Despite ground fog, 22 aircraft took off to find the Zeppelins, but none succeeded, and two pilots were killed when attempting to land. One airship, the L 19, came down in the North Sea because of engine failure and damage from Dutch ground-fire. Although the wreck stayed afloat for a while and was sighted by a British fishing trawler, the boat's crew refused to rescue the Zeppelin's crew because they were outnumbered, and all 16 crew died. Further raids were delayed by an extended period of poor weather and also by the withdrawal of the majority of Naval Zeppelins in an attempt to resolve the recurrent engine failures. Three Zeppelins set off to bomb Rosyth on 5–6 March but were forced by high winds to divert to Hull, killing 18, injuring 52 and causing £25,005 damage. At the beginning of April raids were attempted on five successive nights. Ten airships set off on 31 March: most turned back, and L 15, damaged by anti-aircraft fire and by an aircraft attacking with Ranken darts, came down in the sea near Margate. Most of the 48 killed in the raid were victims of a single bomb which fell on an Army billet in Cleethorpes. The following night two Navy Zeppelins bombed targets in the north of England, killing 22 and injuring 130. On the night of 2/3 April a six-airship raid was made, targeting the naval base at Rosyth, the Forth Bridge and London. None of the airships bombed their intended targets; 13 were killed, 24 injured, and much of the £77,113 damage was caused by the destruction of a warehouse in Leith containing whisky. Raids on 4/5 April and 5/6 April had little effect, as did a five-Zeppelin raid on 25/26 April and a raid by a single Army Zeppelin the following night.
On 2/3 July a nine-Zeppelin raid against Manchester and Rosyth was largely ineffective due to weather conditions, and one airship was forced to land in neutral Denmark, its crew being interned. On 28–29 July, the first raid to include one of the new and much larger R-class Zeppelins, L 31, took place. The 10-Zeppelin raid achieved very little; four turned back early and the rest wandered over a fog-covered landscape before giving up. Adverse weather dispersed raids on 30–31 July and 2–3 August, and on 8–9 August nine airships attacked Hull with little effect. On 24–25 August 12 Navy Zeppelins were launched: eight turned back without attacking and only Heinrich Mathy's L 31 reached London; flying above low clouds, it dropped 36 bombs in 10 minutes on south-east London. Nine people were killed, 40 injured and £130,203 of damage was caused. Zeppelins were very difficult to attack successfully at high altitude, although this also made accurate bombing impossible. Aeroplanes struggled to reach a typical altitude of , and firing the solid bullets usually used by aircraft Lewis guns was ineffectual: they made small holes, causing inconsequential gas leaks. Britain developed new bullets, the Brock, containing the oxidant potassium chlorate, and the Buckingham, filled with phosphorus, which reacted with the chlorate to catch fire and hence ignite the Zeppelin's hydrogen. These had become available by September 1916. The biggest raid to date was launched on 2–3 September, when twelve German Navy and four Army airships set out to bomb London. A combination of rain and snowstorms scattered the airships while they were still over the North Sea. Only one of the naval airships came within seven miles of central London, and both damage and casualties were slight. The newly commissioned Schütte-Lanz SL 11 dropped a few bombs on Hertfordshire while approaching London: it was picked up by searchlights as it bombed Ponders End, and at around 02:15 it was intercepted by a B.E.2c flown by Lt. William Leefe Robinson, who fired three 40-round drums of Brock and Buckingham ammunition into the airship. The third drum started a fire and the airship was quickly enveloped in flames. It fell to the ground near Cuffley, witnessed by the crews of several of the other Zeppelins and many on the ground; there were no survivors. The victory earned Leefe Robinson a Victoria Cross; the pieces of SL 11 were gathered up and sold as souvenirs by the Red Cross to raise money for wounded soldiers. The loss of SL 11 to the new ammunition ended the German Army's enthusiasm for raids on Britain. The German Navy remained aggressive, and another 12-Zeppelin raid was launched on 23–24 September. Eight older airships bombed targets in the Midlands and northeast, while four R-class Zeppelins attacked London. L 30 did not even cross the coast, dropping its bombs at sea. L 31 approached London from the south, dropping a few bombs on the southern suburbs before crossing the Thames and bombing Leyton, killing eight people and injuring 30. L 32 was commanded by Oberleutnant Werner Peterson of the Naval Airship Service, who had only taken command of the ship in August 1916. L 32 approached from the south, crossing the English Channel close to Dungeness lighthouse, passing Tunbridge Wells at 00:10 and dropping bombs on Sevenoaks and Swanley before crossing over Purfleet. After receiving heavy gunfire and encountering a multitude of anti-aircraft searchlights over London, Peterson decided to head up the Essex coast from Tilbury and abort the mission.
Water ballast was dropped to gain altitude and L 32 climbed to 13,000 feet. Shortly afterwards, at 00:45, L 32 was spotted by 2nd Lieutenant Frederick Sowrey of the Royal Flying Corps, who had taken off from nearby RAF Hornchurch (known at the time as Sutton's Farm). As Sowrey approached he fired three drums of ammunition into the hull of L 32, including the latest Brock and Pomeroy incendiary rounds. L 32, according to witness accounts, turned violently and lost altitude, burning from both ends and along its back. The airship narrowly missed Billericay High Street as it passed over, one witness saying that the windows of her home rattled and that the Zeppelin sounded like a hissing freight train. L 32 continued to lose height and came down at Snail's Hall Farm off Green Farm Lane in Great Burstead, crashing at 01:30 on farmland; the 650-foot-long airship struck a large oak tree. All 22 of the crew were killed; two crew members jumped rather than burn (one was said to be Werner Peterson). The crew's bodies were kept in a nearby barn until 27 September, when the Royal Flying Corps transported them to nearby Great Burstead Church. They were interred there until 1966, when they were reinterred at the German Military Cemetery in Cannock Chase. Royal Naval Intelligence officers attended the crash site and recovered the latest secret code book from the gondola of the wrecked L 32. L 33 dropped a few incendiaries over Upminster and Bromley-by-Bow, where it was hit by an anti-aircraft shell, despite being at an altitude of . As it headed towards Chelmsford it began to lose height and came down close to Little Wigborough. The airship was set alight by its crew, but inspection of the wreckage provided the British with much information about the construction of Zeppelins, which was used in the design of the British R33-class airships. The next raid came on 1 October 1916. Eleven Zeppelins were launched at targets in the Midlands and at London. Only L 31, commanded by the experienced Heinrich Mathy making his 15th raid, reached London. As the airship neared Cheshunt at about 23:20 it was picked up by searchlights and attacked by three aircraft from No. 39 Squadron. 2nd Lieutenant Wulstan Tempest succeeded in setting fire to the airship, which came down near Potters Bar. All 19 crew died, many jumping from the burning airship. For the next raid, on 27–28 November, the Zeppelins avoided London for targets in the Midlands. Again the defending aircraft were successful: L 34 was shot down over the mouth of the Tees, and L 21 was attacked by two aircraft and crashed into the sea off Lowestoft. There were no further raids in 1916, although the Navy lost three more craft, all on 28 December: SL 12 was destroyed at Ahlhorn by strong winds after sustaining damage in a poor landing, and at Tondern L 24 crashed into the shed while landing: the resulting fire destroyed both L 24 and the adjacent L 17. 1917 To counter the increasingly effective defences, new Zeppelins were introduced which had an increased operating altitude of and a ceiling of . The first of these S-class Zeppelins, LZ 91 (L 42), entered service in February 1917. They were basically a modification of the R-class, sacrificing strength and power for improved altitude. The surviving R-class Zeppelins were adapted by removing one of the engines.
The improved safety was offset by the extra strain on airship crews caused by altitude sickness and exposure to extreme cold, and by the operating difficulties caused by the unpredictable high winds encountered at altitude. The first raid of 1917 did not occur until 16–17 March: the five Zeppelins encountered very strong winds and none reached their targets. This experience was repeated on 23–24 May. Two days later 21 Gotha bombers attempted a daylight raid on London. They were frustrated by heavy clouds, but the effort led the Kaiser to announce that airship raids on London were to stop; under pressure he later relented to allow the Zeppelins to attack under "favourable circumstances". On 16–17 June, another raid was attempted. Six Zeppelins were to take part, but two were kept in their shed by high winds and another two were forced to return by engine failure. L 42 bombed Ramsgate, hitting a munitions store. The month-old L 48, the first U-class Zeppelin, was forced to drop to where it was caught by four aircraft and destroyed, crashing near Theberton, Suffolk. After ineffective raids on the Midlands and the north of England on 21–22 August and 24–25 September, the last major Zeppelin raid of the war was launched on 19–20 October, with 13 airships heading for Sheffield, Manchester and Liverpool. All were hindered by an unexpected strong headwind at altitude. L 45 was trying to reach Sheffield, but instead dropped bombs on Northampton and London: most fell in the north-west suburbs, but three bombs fell in Piccadilly, Camberwell and Hither Green, causing most of the casualties that night. L 45 then reduced altitude to try to escape the winds but was forced back into the higher air currents by a B.E.2e. The airship then suffered mechanical failure in three engines and was blown over France, eventually coming down near Sisteron; it was set on fire and the crew surrendered. L 44 was brought down by ground fire over France; L 49 and L 50 were also lost to engine failure and the weather over France. L 55 was badly damaged on landing and later scrapped. There were no more raids in 1917, although the airships were not abandoned but refitted with new, more powerful engines. 1918 There were only four raids in 1918, all against targets in the Midlands and northern England. Five Zeppelins attempted to bomb the Midlands on 12–13 March to little effect. The following night three Zeppelins set off, but two turned back because of the weather: the third bombed Hartlepool, killing eight and injuring 29. A five-Zeppelin raid on 12–13 April was also largely ineffective, with thick clouds making accurate navigation impossible. However, some alarm was caused by two of the airships: one reached the east coast and bombed Wigan, believing it to be Sheffield; the other bombed Coventry in the belief that it was Birmingham. The final raid, on 5 August 1918, involved four airships and resulted in the loss of L 70 and the death of its entire crew under the command of Fregattenkapitän Peter Strasser, head of the Imperial German Naval Airship Service and the Führer der Luftschiffe. Crossing the North Sea during daylight, the airship was intercepted by a Royal Air Force DH.4 biplane piloted by Major Egbert Cadbury, and shot down in flames. Technological progress Zeppelin technology improved considerably as a result of the increasing demands of warfare.
The company came under government control, and new personnel were recruited to cope with the increased demand, including the aerodynamicist Paul Jaray and the stress engineer Karl Arnstein. Many of these technological advances originated from Zeppelin's only serious competitor, the Mannheim-based Schütte-Lanz company. While their dirigibles were never as successful, Professor Schütte's more scientific approach to airship design led to important innovations, including the streamlined hull shape, the simpler cruciform fins (replacing the more complicated box-like arrangements of older Zeppelins), individual direct-drive engine cars, anti-aircraft machine-gun positions, and gas ventilation shafts which transferred vented hydrogen to the top of the airship. New production facilities were set up to assemble Zeppelins from components fabricated in Friedrichshafen. The pre-war M-class designs were quickly enlarged to produce the long duralumin P-class, which increased gas capacity from , introduced a fully enclosed gondola and had an extra engine. These modifications added to the maximum ceiling, around to the top speed, and greatly increased crew comfort and hence endurance. Twenty-two P-class airships were built; the first, LZ 38, was delivered to the Army on 3 April 1915. The P class was followed by a lengthened version, the Q class. In July 1916 Luftschiffbau Zeppelin introduced the R-class, long, and with a volume of . These could carry loads of three to four tons of bombs and reach speeds of up to , powered by six Maybach engines. In 1917, following losses to the air defences over Britain, new designs were produced which were capable of flying at much higher altitudes, typically operating at around . This was achieved by lightening the structure, halving the bomb load, removing the defensive armament and reducing the number of engines to five. However, these were not successful as bombers: the greater height at which they operated greatly hindered navigation, and their reduced power made them vulnerable to unfavourable weather conditions. At the beginning of the war Captain Ernst A. Lehmann and Baron Gemmingen, Count Zeppelin's nephew, developed an observation car for use by dirigibles. While the zeppelin flew invisibly within or above the clouds, the car's observer would hang from a cable below the clouds and relay navigation and bomb-dropping orders. It was equipped with a wicker chair, chart table, electric lamp and compass, with a telephone line and a lightning conductor incorporated in the suspension cable. Although used by Army airships, observation cars were not used by the Navy, since Strasser considered that their weight meant an unacceptable reduction in bomb load. End of the war The German defeat also marked the end of German military dirigibles, as the victorious Allies demanded a complete abolition of German air forces and the surrender of the remaining airships as reparations. Specifically, the Treaty of Versailles contained the following articles dealing explicitly with dirigibles: Article 198: "The armed forces of Germany must not include any military or naval air forces ... No dirigible shall be kept." Article 202: "On the coming into force of the present Treaty, all military and naval aeronautical material ... must be delivered to the Governments of the Principal Allied and Associated Powers ... In particular, this material will include all items under the following heads which are or have been in use or were designed for warlike purposes: [...]
"Dirigibles able to take to the air, being manufactured, repaired or assembled." "Plant for the manufacture of hydrogen." "Dirigible sheds and shelters of every kind for aircraft." "Pending their delivery, dirigibles will, at the expense of Germany, be maintained inflated with hydrogen; the plant for the manufacture of hydrogen, as well as the sheds for dirigibles may at the discretion of the said Powers, be left to Germany until the time when the dirigibles are handed over." On 23 June 1919, a week before the treaty was signed, many Zeppelin crews destroyed their airships in their halls in order to prevent delivery, following the example of the Scuttling of the German fleet at Scapa Flow two days earlier. The remaining dirigibles were transferred to France, Italy, Britain, and Belgium in 1920. A total of 84 Zeppelins were built during the war. Over 60 were lost, roughly evenly divided between accident and enemy action. 51 raids had been made on England alone, in which 5,806 bombs were dropped, killing 557 people and injuring 1,358 while causing damage estimated at £1.5 million. It has been argued the raids were effective far beyond material damage in diverting and hampering wartime production: one estimate is that the due to the 1915–16 raids "one sixth of the total normal output of munitions was entirely lost." After World War I Renaissance Count von Zeppelin had died in 1917, before the end of the war. Hugo Eckener, who had long envisioned dirigibles as vessels of peace rather than of war, took command of the Zeppelin business, hoping to quickly resume civilian flights. Despite considerable difficulties, they completed two small passenger airships; LZ 120 Bodensee (scrapped in July 1928), which first flew in August 1919 and in the following months transported passengers between Friedrichshafen and Berlin, and a sister-ship LZ 121 Nordstern, {Scrapped September 1926} which was intended for use on a regular route to Stockholm. However, in 1921 the Allied Powers demanded that these should be handed over as war reparations as compensation for the dirigibles destroyed by their crews in 1919. Germany was not allowed to construct military aircraft and only airships of less than were permitted. This brought a halt to Zeppelin's plans for airship development, and the company temporarily had to resort to manufacturing aluminium cooking utensils. Eckener and his co-workers refused to give up and kept looking for investors and a way to circumvent Allied restrictions. Their opportunity came in 1924. The United States had started to experiment with rigid airships, constructing one of their own, the ZR-1 USS Shenandoah, and buying the R38 (based on the Zeppelin L 70) when the British airship programme was cancelled. However, this broke apart and caught fire during a test flight above the Humber on 23 August 1921, killing 44 crewmen. Under these circumstances, Eckener managed to obtain an order for the next American dirigible. Germany had to pay for this airship itself, as the cost was set against the war reparation accounts, but for the Zeppelin company this was unimportant. LZ 126 made its first flight on 27 August 1924. On 12 October at 07:30 local time the Zeppelin took off for the US under the command of Hugo Eckener. The ship completed its voyage without any difficulties in 80 hours 45 minutes. American crowds enthusiastically celebrated the arrival, and President Calvin Coolidge invited Eckener and his crew to the White House, calling the new Zeppelin an "angel of peace". 
Given the designation ZR-3 USS Los Angeles and refilled with helium (partly sourced from the Shenandoah) after its Atlantic crossing, the ship became the most successful American airship. It operated reliably for eight years until it was retired in 1932 for economic reasons. It was dismantled in August 1940. Golden age With the delivery of LZ 126, the Zeppelin company had reasserted its lead in rigid airship construction, but it was not yet fully back in business. In 1926 restrictions on airship construction were relaxed, but acquiring the necessary funds for the next project proved a problem in the difficult economic situation of post–World War I Germany, and it took Eckener two years of lobbying and publicity to secure the realization of LZ 127. Another two years passed before 18 September 1928, when the new dirigible, christened Graf Zeppelin in honour of the Count, flew for the first time. With a total length of and a volume of 105,000 m³, it was the largest dirigible to have been built at the time. Eckener's initial aim was to use Graf Zeppelin for experimental and demonstration flights to prepare the way for regular airship travel, carrying passengers and mail to cover the costs. In October 1928 its first long-range voyage brought it to Lakehurst, taking 112 hours and setting a new endurance record for airships. Eckener and his crew, which included his son Hans, were once more welcomed enthusiastically, with confetti parades in New York and another invitation to the White House. Graf Zeppelin toured Germany and visited Italy, Palestine, and Spain. A second trip to the United States was aborted in France due to engine failure in May 1929. In August 1929, Graf Zeppelin departed for another daring enterprise: a circumnavigation of the globe. The growing popularity of the "giant of the air" made it easy for Eckener to find sponsors. One of these was the American press tycoon William Randolph Hearst, who requested that the tour officially start in Lakehurst. As with the October 1928 flight to New York, Hearst had placed a reporter, Grace Marguerite Hay Drummond-Hay, on board: she therefore became the first woman to circumnavigate the globe by air. From there, Graf Zeppelin flew to Friedrichshafen, then Tokyo, Los Angeles, and back to Lakehurst, in 21 days 5 hours and 31 minutes. Including the initial and final trips between Friedrichshafen and Lakehurst and back, the dirigible had travelled . In the following year, Graf Zeppelin undertook trips around Europe, and following a successful tour to Recife, Brazil, in May 1930, it was decided to open the first regular transatlantic airship line. This line operated between Frankfurt and Recife, and was later extended to Rio de Janeiro, with a stop in Recife. Despite the beginning of the Great Depression and growing competition from fixed-wing aircraft, LZ 127 transported an increasing volume of passengers and mail across the ocean every year until 1936. The ship made another spectacular voyage in July 1931 when it made a seven-day research trip to the Arctic. Such a trip had been a dream of Count von Zeppelin twenty years earlier, one that could not be realized at the time due to the outbreak of war. Eckener intended to follow the successful airship with another, larger Zeppelin, designated LZ 128. This was to be powered by eight engines, in length, with a capacity of .
However, the loss of the British passenger airship R101 on 5 October 1930 led the Zeppelin company to reconsider the safety of hydrogen-filled vessels, and the design was abandoned in favour of a new project, LZ 129. This was intended to be filled with inert helium. Hindenburg, the end of an era The coming to power of the Nazi Party in 1933 had important consequences for Luftschiffbau Zeppelin. Zeppelins became a propaganda tool for the new regime: they would now display the Nazi swastika on their fins and occasionally tour Germany to broadcast march music and propaganda speeches to the people. In 1934 Joseph Goebbels, the Minister of Propaganda, contributed two million reichsmarks towards the construction of LZ 129, and in 1935 Hermann Göring established a new airline directed by Ernst Lehmann, the Deutsche Zeppelin Reederei, as a subsidiary of Lufthansa to take over Zeppelin operations. Hugo Eckener, the father of the post-war Zeppelin renaissance, was an outspoken anti-Nazi: complaints about the use of Zeppelins for propaganda purposes in 1936 led Goebbels to declare "Dr. Eckener has placed himself outside the pale of society. Henceforth his name is not to be mentioned in the newspapers and his photograph is not to be published". On 4 March 1936 LZ 129 Hindenburg (named after the former President of Germany, Paul von Hindenburg) made its first flight. The Hindenburg was the largest airship ever built. It had been designed to use non-flammable helium, but the only supplies of the rare gas were controlled by the United States, which refused to allow its export, and the fatal decision was made to fill the Hindenburg with flammable hydrogen. Apart from propaganda flights, LZ 129 was used on the transatlantic service alongside Graf Zeppelin. On 6 May 1937, while landing at Lakehurst after a transatlantic flight, the tail of the ship caught fire, and within seconds the Hindenburg burst into flames, killing 35 of the 97 people on board (13 passengers and 22 crew, among them Ernst Lehmann) and one member of the ground crew. The cause of the fire was never definitively determined, although the investigation into the accident concluded that static electricity had ignited hydrogen which had leaked from the gasbags; there were also allegations of sabotage. Despite the obvious danger, there remained a list of 400 people who still wanted to fly as Zeppelin passengers and had paid for the trip. Their money was refunded in 1940. Graf Zeppelin was retired one month after the Hindenburg wreck and turned into a museum. A new intended flagship, LZ 130 Graf Zeppelin II, was completed in 1938 and, inflated with hydrogen, made some test flights (the first on 14 September), but never carried passengers. Another project, LZ 131, designed to be even larger than Hindenburg and Graf Zeppelin II, never progressed beyond the production of a few ring frames. Graf Zeppelin II was assigned to the Luftwaffe and made about 30 test flights prior to the beginning of World War II. Most of those flights were carried out near the Polish border, first in the Sudeten mountains region of Silesia, then in the Baltic Sea region. During one such flight LZ 130 crossed the Polish border near the Hel Peninsula, where it was intercepted by a Polish Lublin R-XIII aircraft from Puck naval airbase and forced to leave Polish airspace. During this time, LZ 130 was used for electronic scouting missions and was fitted with a variety of measuring equipment.
In August 1939, it made a flight near the coastline of Great Britain in an attempt to determine whether the 100-metre towers erected from Portsmouth to Scapa Flow were used for aircraft radio location. Photography, radio-wave interception, and magnetic and radio-frequency analysis all failed to detect the operational British Chain Home radar, because the search was conducted in the wrong frequency range: the frequencies examined were too high, an assumption based on the Germans' own radar systems. The mistaken conclusion was that the British towers were not connected with radar operations, but were for naval radio communications. After the beginning of the Second World War on 1 September, the Luftwaffe ordered LZ 127 and LZ 130 moved to a large Zeppelin hangar in Frankfurt, where the skeleton of LZ 131 was also located. In March 1940 Göring ordered the scrapping of the remaining airships, and on 6 May the Frankfurt hangars were demolished. Cultural influences Zeppelins have been an inspiration to music, cinematography and literature. The 1930 movie Hell's Angels, directed by Howard Hughes, features an unsuccessful Zeppelin raid on London during World War I. In 1934, the calypsonian Attila the Hun recorded "Graf Zeppelin", commemorating the airship's visit to Trinidad. Zeppelins are often featured in alternate history and parallel universe fiction. They feature prominently in the popular fantasy novels of the His Dark Materials trilogy and The Book of Dust series by Philip Pullman. In the American science fiction series Fringe, Zeppelins are a notable historical idiosyncrasy that helps differentiate the series' two parallel universes. Zeppelins appear in a similar role in the Doctor Who episodes "The Rise of the Cybermen" and "The Age of Steel", in which the TARDIS crashes in an alternate reality where Britain is a "People's Republic" and Pete Tyler, Rose Tyler's father, is alive and a wealthy inventor. They are also seen in the alternate-reality 1939 plot line of the film Sky Captain and the World of Tomorrow, and have an iconic association with the steampunk subcultural movement in broader terms. In 1989, Japanese animator Hayao Miyazaki released Kiki's Delivery Service, which features a Zeppelin as a plot element. A Zeppelin was used in Indiana Jones and the Last Crusade, when Jones and his father try to escape from Germany. In 1968, English rock band Led Zeppelin chose their name after Keith Moon, drummer of The Who, told guitarist Jimmy Page that his idea to create a band would "go down like a lead balloon." Page's manager, Peter Grant, suggested changing the spelling of "Lead" to "Led" to avoid mispronunciation. "Balloon" was replaced with "Zeppelin" as Jimmy Page saw it as a symbol of "the perfect combination of heavy and light, combustibility and grace." For the group's self-titled debut album, Page suggested the group use a picture of the Hindenburg crashing in New Jersey in 1937, much to Countess Eva von Zeppelin's disgust. Von Zeppelin tried to sue the group for using her family name, but the case was eventually dismissed. Modern era Since the 1990s Luftschiffbau Zeppelin, a subsidiary of the Zeppelin conglomerate that built the original German Zeppelins, has been developing Zeppelin "New Technology" (NT) airships. These vessels are semi-rigid airships, maintaining their shape partly by internal gas pressure and partly by an internal frame. The Airship Ventures company operated zeppelin passenger travel to California from October 2008 to November 2012 with one of these Zeppelin NT airships.
In May 2011, Goodyear announced that it would replace its fleet of blimps with Zeppelin NTs, resurrecting a partnership that had ended over 70 years earlier. Goodyear placed an order for three Zeppelin NTs, which entered service between 2014 and 2018. Modern zeppelins are held aloft by the inert gas helium, eliminating the danger of combustion illustrated by the Hindenburg. It has been proposed that modern zeppelins could be powered by hydrogen fuel cells. Zeppelin NTs are often used for sightseeing trips; for example, D-LZZF (c/n 03) was used for Edelweiss's birthday celebration, performing flights over Switzerland in an Edelweiss livery, and is now used, weather permitting, on flights over Munich.
Technology
Types of aircraft
null
34441
https://en.wikipedia.org/wiki/Zygote
Zygote
A zygote is a eukaryotic cell formed by a fertilization event between two gametes. The zygote's genome is a combination of the DNA in each gamete, and contains all of the genetic information of a new individual organism. The sexual fusion of haploid cells is called karyogamy, the result of which is the formation of a diploid cell called the zygote or zygospore. History German zoologists Oscar and Richard Hertwig made some of the first discoveries on animal zygote formation in the late 19th century. In multicellular organisms The zygote is the earliest developmental stage. In humans and most other anisogamous organisms, a zygote is formed when an egg cell and a sperm cell come together to create a new unique organism. The formation of a totipotent zygote with the potential to produce a whole organism depends on epigenetic reprogramming. DNA demethylation of the paternal genome in the zygote appears to be an important part of epigenetic reprogramming. In the paternal genome of the mouse, demethylation of DNA, particularly at sites of methylated cytosines, is likely a key process in establishing totipotency. Demethylation involves the processes of base excision repair and possibly other DNA-repair–based mechanisms. Humans In human fertilization, a released ovum (a haploid secondary oocyte with replicated chromosome copies) and a haploid sperm cell (the male gamete) combine to form a single diploid cell called the zygote. Once the single sperm fuses with the oocyte, the latter completes its second meiotic division, forming a haploid daughter cell with only 23 chromosomes, almost all of the cytoplasm, and the male pronucleus. The other product of meiosis is the second polar body, with only 23 chromosomes but no ability to replicate or survive. In the fertilized daughter cell, the DNA is then replicated in the two separate pronuclei derived from the sperm and ovum, so that the diploid zygote temporarily contains a doubled (4n) complement of DNA. After approximately 30 hours from the time of fertilization, a fusion of the pronuclei and an immediate mitotic division produce two diploid (2n) daughter cells called blastomeres. Between the stages of fertilization and implantation, the developing embryo is sometimes termed a preimplantation conceptus. This stage has also been referred to as the pre-embryo in legal discourse, including in relation to the use of embryonic stem cells. In the US, the National Institutes of Health has determined that the traditional classification of the pre-implantation embryo is still correct. After fertilization, the conceptus travels down the fallopian tube towards the uterus while continuing to divide without actually increasing in size, in a process called cleavage. After four divisions, the conceptus consists of 16 blastomeres, and it is known as the morula. Through the processes of compaction, cell division, and blastulation, the conceptus takes the form of the blastocyst by the fifth day of development, just as it approaches the site of implantation. When the blastocyst hatches from the zona pellucida, it can implant in the endometrial lining of the uterus and begin the gastrulation stage of embryonic development. The human zygote has been genetically edited in experiments designed to cure inherited diseases. Fungi In fungi, the zygote may then enter meiosis or mitosis, depending on the life cycle of the species. Plants In plants, the zygote may be polyploid if fertilization occurs between meiotically unreduced gametes. In land plants, the zygote is formed within a chamber called the archegonium.
In seedless plants, the archegonium is usually flask-shaped, with a long hollow neck through which the sperm cell enters. As the zygote divides and grows, it does so inside the archegonium. In single-celled organisms The zygote can divide asexually by mitosis to produce identical offspring. A Chlamydomonas zygote contains chloroplast DNA (cpDNA) from both parents; such cells are generally rare, since normally cpDNA is inherited uniparentally from the mt+ mating type parent. These rare biparental zygotes allowed mapping of chloroplast genes by recombination.
Biology and health sciences
Animal reproduction
Biology
34460
https://en.wikipedia.org/wiki/Zebra
Zebra
Zebras (subgenus Hippotigris) are African equines with distinctive black-and-white striped coats. There are three living species: Grévy's zebra (Equus grevyi), the plains zebra (E. quagga), and the mountain zebra (E. zebra). Zebras share the genus Equus with horses and asses, the three groups being the only living members of the family Equidae. Zebra stripes come in different patterns, unique to each individual. Several theories have been proposed for the function of these patterns, with most evidence supporting a role as a deterrent for biting flies. Zebras inhabit eastern and southern Africa and can be found in a variety of habitats such as savannahs, grasslands, woodlands, shrublands, and mountainous areas. Zebras are primarily grazers and can subsist on lower-quality vegetation. They are preyed on mainly by lions, and typically flee when threatened but also bite and kick. Zebra species differ in social behaviour, with plains and mountain zebras living in stable harems consisting of an adult male or stallion, several adult females or mares, and their young or foals, while Grévy's zebras live alone or in loosely associated herds. In harem-holding species, adult females mate only with their harem stallion, while male Grévy's zebras establish territories which attract females, and the species is promiscuous. Zebras communicate with various vocalisations, body postures and facial expressions. Social grooming strengthens social bonds in plains and mountain zebras. Zebras' dazzling stripes make them among the most recognizable mammals. They have been featured in art and stories in Africa and beyond. Historically, they have been highly sought by exotic animal collectors, but unlike horses and donkeys, zebras have never been completely domesticated. The International Union for Conservation of Nature (IUCN) lists Grévy's zebra as endangered, the mountain zebra as vulnerable and the plains zebra as near-threatened. The quagga (E. quagga quagga), a type of plains zebra, was driven to extinction in the 19th century. Nevertheless, zebras can be found in numerous protected areas. Etymology The English name "zebra" derives from Italian, Spanish or Portuguese. Its origins may lie in the Latin equiferus, meaning "wild horse". Equiferus appears to have entered Portuguese as ezebro or zebro, which was originally used for a legendary equine in the wilds of the Iberian Peninsula during the Middle Ages. In 1591, Italian explorer Filippo Pigafetta recorded "zebra" being used to refer to the African animals by Portuguese visitors to the continent. In ancient times, the zebra was called hippotigris ("horse tiger") by the Greeks and Romans. The word zebra was traditionally pronounced with a long initial vowel, but over the course of the 20th century the pronunciation with the short initial vowel became the norm in British English. The pronunciation with a long initial vowel remains standard in American English. Taxonomy Zebras are classified in the genus Equus (known as equines) along with horses and asses. These three groups are the only living members of the family Equidae. The plains zebra and mountain zebra were traditionally placed in the subgenus Hippotigris (C. H. Smith, 1841), in contrast to Grévy's zebra, which was considered the sole species of the subgenus Dolichohippus (Heller, 1912). Groves and Bell (2004) placed all three species in the subgenus Hippotigris. A 2013 phylogenetic study found that the plains zebra is more closely related to Grévy's zebra than to the mountain zebra. 
The extinct quagga was originally classified as a distinct species. Later genetic studies have placed it as the same species as the plains zebra, either a subspecies or just the southernmost population. Molecular evidence supports zebras as a monophyletic lineage. Equus originated in North America, and direct paleogenomic sequencing of a 700,000-year-old middle Pleistocene horse metapodial bone from Canada implies a date of 4.07 million years ago (mya) for the most recent common ancestor of the equines, within a range of 4.0 to 4.5 mya. Horses split from asses and zebras around this time, and equines colonised Eurasia and Africa around 2.1–3.4 mya. Zebras and asses diverged from each other close to 2 mya. The mountain zebra diverged from the other species around 1.6 mya, and the plains and Grévy's zebra split 1.4 mya. A 2017 mitochondrial DNA study placed the Eurasian Equus ovodovi and the subgenus Sussemionus lineage as closer to zebras than to asses. However, other studies disputed this placement, finding the Sussemionus lineage basal to the zebra-ass group, though they suggested that the Sussemionus lineage may have received gene flow from zebras. The cladogram of Equus below is based on Vilstrup and colleagues (2013) and Jónsson and colleagues (2014). Extant species Fossil record In addition to the three living species, some fossil zebras and relatives have also been identified. Equus koobiforensis is an early equine, basal to zebras, found in the Shungura Formation, Ethiopia and the Olduvai Gorge, Tanzania, and dated to around 2.3 mya. E. oldowayensis is identified from remains in Olduvai Gorge dating to 1.8 mya. Fossil skulls of E. mauritanicus from Algeria, which date to around 1 mya, appear to show affinities with the plains zebra. E. capensis, known as the Cape zebra, appeared around 2 mya and lived throughout southern and eastern Africa. Non-African equines that may have been basal to zebras include E. sansaniensis of Eurasia (circa 2.5 mya) and E. namadicus (circa 2.5 mya) and E. sivalensis (circa 2.0 mya) of the Indian subcontinent. Hybridisation Fertile hybrids have been reported in the wild between plains and Grévy's zebras. Hybridisation has also been recorded between the plains and mountain zebras, though it is possible that these hybrids are infertile due to the difference in chromosome numbers between the two species. Captive zebras have been bred with horses and donkeys; these are known as zebroids. A zorse is a cross between a zebra and a horse; a zonkey, between a zebra and a donkey; and a zoni, between a zebra and a pony. Zebroids are often born sterile with dwarfism. Characteristics As with all wild equines, zebras have barrel-chested bodies with tufted tails, elongated faces and long necks with long, erect manes. Their thin legs are each supported by a spade-shaped toe covered in a hard hoof. Their dentition is adapted for grazing; they have large incisors that clip grass blades and rough molars and premolars well suited for grinding. Males have spade-shaped canines, which can be used as weapons in fighting. The eyes of zebras are at the sides and far up the head, which allows them to look over the tall grass while feeding. Their moderately long, erect ears are movable and can locate the source of a sound. Unlike horses, zebras and asses have chestnut callosities present only on their front legs. In contrast to other living equines, zebras have longer front legs than back legs. 
Diagnostic traits of the zebra skull include a relatively small size with a straight dorsal outline, protruding eye sockets, a narrower rostrum, a less conspicuous postorbital bar, separation of the metaconid and metastylid of the tooth by a V-shaped canal, and a rounded enamel wall. Stripes Zebras are easily recognised by their bold black-and-white striping patterns. The coat appears to be white with black stripes, as suggested by the unstriped belly and legs, but the skin is black. Young or foals are born with brown and white coats, and the brown darkens with age. A dorsal line acts as the backbone for vertical stripes along the sides, from the head to the rump. On the snout the stripes curve toward the nostrils, while the stripes above the front legs split into two branches. On the rump, they develop into species-specific patterns. The stripes on the legs, ears and tail are separate and horizontal. Striping patterns are unique to an individual and heritable. During embryonic development, the stripes appear at eight months, but the patterns may be determined at three to five weeks. For each species there is a point in embryonic development where the stripes are perpendicular to the dorsal line and spaced apart. However, this happens at three weeks of development for the plains zebra, four weeks for the mountain zebra, and five for Grévy's zebra. The difference in timing is thought to be responsible for the differences in the striping patterns of the different species. Various abnormalities of the patterns have been documented in plains zebras. In "melanistic" zebras, dark stripes are highly concentrated on the torso but the legs are whiter. "Spotted" individuals have broken-up black stripes around the dorsal area. There have even been morphs with white spots on dark backgrounds. Striping abnormalities have been linked to inbreeding. Albino zebras have been recorded in the forests of Mount Kenya, with the dark stripes being blonde. The quagga had brown and white stripes on the head and neck, brown upper parts and a white belly, tail and legs. Function The function of stripes in zebras has been discussed among biologists since at least the 19th century. Popular hypotheses include the following: The crypsis hypothesis suggests that the stripes allow the animal to blend in with its environment or break up its outline. This was the earliest hypothesis, and proponents argued that the stripes were particularly suited for camouflage in tall grassland and woodland habitat. Alfred Wallace also wrote in 1896 that stripes make zebras less noticeable at night. Critics note that zebras graze in open habitat and do not behave cryptically, being noisy, fast and social, and do not freeze when a predator is near. In addition, the camouflaging stripes of woodland-living ungulates like bongos and bushbucks are much less vivid, with less contrast with the background colour. A 1987 Fourier analysis study concluded that the spatial frequencies of zebra stripes do not line up with their environment, while a 2014 study of wild equine species and subspecies could not find any correlations between striping patterns and woodland habitats. Melin and colleagues (2016) found that lions and hyenas do not appear to perceive the stripes beyond a certain distance in daytime or at night, making the stripes useless for blending in except when predators are already close enough to smell or hear their target. 
They also found that the stripes do not make the zebra less noticeable than solidly coloured herbivores on the open plains. They suggested that stripes may give zebras an advantage in woodlands, as the dark stripes could line up with the outlines of tree branches and other vegetation. The confusion hypothesis states that the stripes confuse predators, be it by: making it harder to distinguish individuals in a group as well as determining the number of zebras in a group; making it difficult to determine an individual's outline when the group runs away; reducing a predator's ability to keep track of a target during a chase; dazzling an assailant so they have difficulty making contact; or making it difficult for a predator to deduce the zebra's size, speed and direction via motion dazzle. This theory has been proposed by several biologists since at least the 1970s. A 2014 computer study of zebra stripes found that they may create a wagon-wheel effect and/or barber pole illusion when in motion. The researchers concluded that this could be used against mammalian predators or biting flies. The use of the stripes for confusing mammalian predators has been questioned. The stripes of zebras could make groups seem smaller, and thus more likely to be attacked. Zebras also tend to scatter when fleeing from attackers, and thus the stripes could not break up an individual's outline. Lions, in particular, appear to have no difficulty targeting and catching zebras when they get close and take them by ambush. In addition, no correlations have been found between the number of stripes and populations of mammalian predators. Hughes and colleagues (2021) concluded that moving targets with solid grey or less contrasting patterns are more likely than striped ones to escape capture. The aposematic hypothesis suggests that the stripes serve as warning colouration. This hypothesis was first suggested by Wallace in 1867 and discussed in more detail by Edward Bagnall Poulton in 1890. As with known aposematic mammals, zebras are recognizable up close, live in more open environments, have a high risk of predation and do not hide or act inconspicuous. However, critics note that stripes do not work on lions, which frequently prey on zebras, though they may work on smaller predators, and zebras are not slow-moving enough to need to ward off threats. In addition, zebras do not possess adequate defenses to back up the warning pattern. The social function hypothesis states that stripes serve a role in intraspecific or individual recognition, social bonding, mutual grooming or a signal of fitness. Charles Darwin wrote in 1871 that "a female zebra would not admit the addresses of a male ass until he was painted so as to resemble a zebra", while Wallace stated in 1871 that: "The stripes therefore may be of use by enabling stragglers to distinguish their fellows at a distance". Regarding species and individual identification, critics note that zebra species have limited range overlap with each other, and horses can recognise each other using visual communication. In addition, no correlation has been found between striping and social behaviour or group numbers among equines, and no link has been found between fitness and striping. The thermoregulatory hypothesis suggests that stripes help to control a zebra's body temperature. In 1971, biologist H. A. Baldwin noted that heat would be absorbed by the black stripes and reflected by the white ones. In 1990, zoologist Desmond Morris suggested that the stripes create cooling convection currents. 
A 2019 study supported this, finding that where the faster air currents of the warmer black stripes meet those of the white, air swirls form. The researchers also concluded that during the hottest times of the day, zebras erect their black hair to release heat from the skin and flatten it again when it gets cooler. Larison and colleagues (2015) determined that environmental temperature is a strong predictor for zebra striping patterns. Others have found no evidence that zebras have lower body temperatures than other ungulates whose habitat they share, or that striping correlates with temperature. A 2018 experimental study which dressed water-filled metal barrels in horse, zebra and cattle hides concluded that the zebra stripes had no effect on thermoregulation. The fly protection hypothesis holds that the stripes deter blood-sucking flies. Horse flies, in particular, spread diseases that are lethal to equines, such as African horse sickness, equine influenza, equine infectious anemia and trypanosomiasis. In addition, zebra hair is about as long as the mouthparts of these flies. This hypothesis is the most strongly supported by the evidence. In 1930, biologist R. Harris found that flies preferred landing on solidly coloured surfaces over those with black-and-white striped patterns, and a 1981 study proposed that this was a function of zebra stripes. A 2014 study found a correlation between striping and overlap with horse fly and tsetse fly populations and activity. Other studies have found that zebras are rarely targeted by these insect species. Caro and colleagues (2019) studied captive zebras and horses and observed that neither could deter flies from a distance, but zebra stripes kept flies from landing, both on zebras and on horses dressed in zebra-print coats. There does not appear to be any difference in the effectiveness of repelling flies between the different zebra species; thus the difference in striping patterns may have evolved for other reasons. White or light stripes painted on dark bodies have also been found to reduce fly irritation in both cattle and humans. The effect even extends to pelts, with zebra pelts being less attractive to flies than unstriped impala pelts. How the stripes repel flies is less clear. A 2012 study concluded that they disrupt the polarised light patterns these insects use to locate water and habitat, though subsequent studies have refuted this. Stripes do not appear to work like a barber pole against flies, since checkered patterns also repel them. There is also little evidence that zebra stripes confuse the insects via visual distortion or aliasing. Takács and colleagues (2022) suggest that, when the animal is in sunlight, temperature gradients between the warmer dark stripes and cooler white stripes prevent horseflies from detecting the warm blood vessels underneath. Caro and colleagues (2023) conclude that the insects are disoriented by the high colour contrast and relative thinness of the patterns. Behaviour and ecology Zebras may travel or migrate to wetter areas during the dry season. Plains zebras have been recorded travelling between Namibia and Botswana, the longest land migration of mammals in Africa. When migrating, they appear to rely on some memory of the locations where foraging conditions were best and may predict conditions months after their arrival. Plains zebras are more water-dependent and live in moister environments than other species. They usually can be found from a water source. 
Grévy's zebras can survive almost a week without water but will drink it every day when given the chance, and their bodies maintain water better than cattle. Mountain zebras can be found at elevations of up to . Zebras sleep for seven hours a day, standing up during the day and lying down during the night. They regularly use various objects as rubbing posts and will roll on the ground. A zebra's diet is mostly grasses and sedges, but they will opportunistically consume bark, leaves, buds, fruits, and roots. Compared to ruminants, zebras have a simpler and less efficient digestive system. Nevertheless, they can subsist on lower-quality vegetation. Zebras may spend 60–80% of their time feeding, depending on the availability of vegetation. The plains zebra is a pioneer grazer, mowing down the upper, less nutritious grass canopy and preparing the way for more specialised grazers like wildebeest, which depend on shorter and more nutritious grasses below. Zebras are preyed on mainly by lions. Leopards, cheetahs, spotted hyenas, brown hyenas and wild dogs pose less of a threat to adults. Biting and kicking are a zebra's defense tactics. When threatened by lions, zebras flee, and when caught they are rarely effective in fighting off the big cats. In one study, the maximum speed of a zebra was found to be while a lion was measured at . Zebras do not escape lions by speed alone but by turning sideways, especially when the cat is close behind. With smaller predators like hyenas and dogs, zebras may act more aggressively, especially in defense of their young. Social behaviour Zebra species have two basic social structures. Plains and mountain zebras live in stable, closed family groups or harems consisting of one stallion, several mares, and their offspring. These groups have their own home ranges, which overlap, and they tend to be nomadic. Stallions form and expand their harems by herding young mares away from their birth harems. The stability of the group remains even when the family stallion is displaced. Plains zebra groups gather into large herds and may create temporarily stable subgroups within a herd, allowing individuals to interact with those outside their group. Females in harems can spend more time feeding and gain protection, both for themselves and their young. The females have a linear dominance hierarchy, with the high-ranking females being the ones that have lived in the group longest. While traveling, the most dominant females and their offspring lead the group, followed by the next most dominant. The family stallion trails behind. Young of both sexes leave their natal groups as they mature; females are usually herded by outside males to become part of their harems. In the more arid-living Grévy's zebras, adults have more fluid associations, and adult males establish large territories, marked by dung piles, and mate with the females that enter them. Grazing and drinking areas tend to be separated in these environments, and the most dominant males establish territories near watering holes, which attract females with dependent foals and those who simply want a drink, while less dominant males control territories away from water with more vegetation, and only attract mares without foals. Mares may travel through several territories but remain in one when they have young. Staying in a territory offers a female protection from harassment by outside males, as well as access to resources. In all species, excess males gather in bachelor groups. 
These are typically young males that are not yet ready to establish a harem or territory. With the plains zebra, the oldest males are the most dominant, and group membership is stable. Bachelor groups tend to be at the boundaries of herds, and during group movements the bachelors follow behind or along the sides. Mountain zebra bachelor groups may also include young females that have left their natal group early, as well as old, former harem males. A territorial Grévy's zebra stallion may allow non-territorial bachelors in his territory; however, when a mare in oestrus is present, the territorial stallion keeps other stallions at bay. Bachelors prepare for their future harem roles with play fights and greeting/challenge rituals, which make up most of their activities. Fights between males usually occur over mates and involve biting and kicking. In plains zebras, a stallion will fight other males over a recently matured mare to bring her into his group, and her father will fight off other males trying to abduct her. As long as a harem stallion is healthy, he is not usually challenged. Only unhealthy stallions have their harems taken over, and even then, the new stallion gradually takes over, peacefully displacing the old one. Agonistic behaviour between male Grévy's zebras occurs at the border of their territories. Communication Zebras produce a number of vocalisations and noises. The plains zebra has a distinctive, barking contact call heard as "a-ha, a-ha, a-ha" or "kwa-ha, kaw-ha, ha, ha". The mountain zebra may produce a similar sound, while the call of Grévy's zebra has been described as "something like a hippo's grunt combined with a donkey's wheeze". Loud snorting and rough "gasping" in zebras signals alarm. Squealing is usually made when in pain, but can also be heard in friendly interactions. Zebras also communicate with visual displays, and the flexibility of their lips allows them to make complex facial expressions. Visual displays also consist of head, ear, and tail postures. A zebra may signal an intention to kick by dropping back its ears and whipping its tail. Flattened ears, bared teeth and a waving head may be used as threatening gestures by stallions. Individuals may greet each other by rubbing and sniffing, and then mutually rub their cheeks and move along each other's bodies to sniff the genitals. They then may caress their shoulders against each other and lay their heads on one another. This greeting usually occurs between harem or territorial males or among bachelor males playing. Plains and mountain zebras strengthen their social bonds with grooming. Members of a harem nibble and rake along the neck, shoulder, and back with their teeth and lips. Grooming usually occurs between mothers and foals and between stallions and mares. Grooming establishes social rank and eases aggressive behaviour, although Grévy's zebras generally do not perform social grooming. Reproduction and parenting Among plains and mountain zebras, the adult females mate only with their harem stallion, while in Grévy's zebras, mating is more polygynandrous and the males have larger testes for sperm competition. Female zebras have five to ten day long oestrous cycles; physical signs include swollen, everted (inside-out) labia and copious flows of urine and mucus. Upon reaching peak oestrus, mares spread out their legs, lift their tails and open their mouths when in the presence of a male. 
Males assess the female's reproductive state with a curled lip and bared teeth (the flehmen response), and the female solicits mating by backing in. Gestation is typically around a year. A few days to a month later, mares can return to oestrus. In harem-holding species, oestrus in a female becomes less noticeable to outside males as she gets older, hence competition for older females is virtually nonexistent. Usually, a single foal is born, which is capable of running within an hour of birth. A newborn zebra will follow anything that moves, so a new mother prevents other mares from approaching her foal while it becomes familiar with her striping pattern, smell and voice. At a few weeks old, foals begin to graze, but may continue to nurse for eight to thirteen months. Living in an arid environment, Grévy's zebras have longer nursing intervals, and their young only begin to drink water three months after birth. In plains and mountain zebras, foals are cared for mostly by their mothers, but if threatened by pack-hunting hyenas and dogs, the entire group works together to protect all the young. The group forms a protective front with the foals in the centre, and the stallion will rush at predators that come too close. In Grévy's zebras, young stay in "kindergartens" while their mothers leave for water. These groups are tended by the territorial male. A stallion may look after a foal in his territory to ensure that the mother stays, though the foal may not be his. By contrast, plains zebra stallions are generally intolerant of foals that are not theirs and may practice infanticide and feticide via violence toward the pregnant mare. Human relations Cultural significance With their distinctive black-and-white stripes, zebras are among the most recognizable mammals. They have been associated with beauty and grace, with naturalist Thomas Pennant describing them in 1781 as "the most elegant of quadrupeds". Zebras have been popular in photography, with some wildlife photographers describing them as the most photogenic animal. They have become staples in children's stories and wildlife-themed art, such as depictions of Noah's Ark. In children's alphabet books, the animals are often used to represent the letter 'Z'. Zebra stripe patterns are popularly used for body paintings, dress, furniture and architecture. Zebras have been featured in African art and culture for millennia. They are depicted in rock art in Southern Africa dating from 28,000 to 20,000 years ago, though less often than antelope species like eland. How the zebra got its stripes has been the subject of folk tales, some of which involve it being scorched by fire. The Maasai proverb "a man without culture is like a zebra without stripes" has become popular in Africa. The San people connected zebra stripes with water, rain and lightning, and water spirits were conceived of as having these markings. For the Shona people, the zebra is a totem animal and is glorified in a poem as an "iridescent and glittering creature". Its stripes have symbolised the union of male and female, and at the ruined city of Great Zimbabwe, zebra stripes decorate what is believed to be a domba, a school meant to prepare girls for adulthood. In the Shona language, the name madhuve means "woman/women of the zebra totem" and is a name for girls in Zimbabwe. The plains zebra is the national animal of Botswana, and zebras have been depicted on stamps in colonial and post-colonial Africa. 
For people of the African diaspora, the zebra represented the politics of race and identity, being both black and white. In cultures outside of its range, the zebra has been thought of as a more exotic alternative to the horse; the comic book character Sheena, Queen of the Jungle, is depicted riding a zebra, and explorer Osa Johnson was photographed riding one. The film Racing Stripes features a captive zebra that is ostracised by the horses and ends up being ridden by a rebellious girl. Zebras have been featured as characters in animated films like Khumba, The Lion King and the Madagascar films, and in television series such as Zou. Zebras have been popular subjects for abstract, modernist and surrealist artists. Such art includes Christopher Wood's Zebra and Parachute, Lucian Freud's The Painter's Room and Quince on a Blue Table, and the various paintings of Mary Fedden and Sidney Nolan. Victor Vasarely depicted zebras as black and white lines connected in a jigsaw-puzzle fashion. Carel Weight's Escape of the Zebra from the Zoo during an Air Raid was based on a real-life incident of a zebra escaping during the bombing of London Zoo, and consists of four comic book-like panels. Zebras have lent themselves to products and advertisements, notably for 'Zebra Grate Polish' cleaning supplies by British manufacturer Reckitt and Sons and for the Japanese pen manufacturer Zebra Co., Ltd. Captivity Zebras have been kept in captivity since at least the Roman Empire. In later times, captive zebras have been shipped around the world, often for diplomatic reasons. In 1261, Sultan Baibars of Egypt established an embassy with Alfonso X of Castile and sent a zebra and other exotic animals as gifts. In 1417, a zebra was gifted to China by Somalia and displayed before the Yongle Emperor. The fourth Mughal emperor, Jahangir, received a zebra from Ethiopia in 1620, and Ustad Mansur made a painting of it. In the 1670s, Ethiopian Emperor Yohannes I exported two zebras to the Dutch governor of Jakarta. These animals would eventually be given by the Dutch to the Tokugawa Shogunate of Japan. When Queen Charlotte received a zebra as a wedding gift in 1762, the animal became a source of fascination for the people of Britain. Many flocked to see it at its paddock at Buckingham Palace. It soon became the subject of humour and satire, being referred to as "The Queen's Ass", and was the subject of an oil painting by George Stubbs in 1763. The zebra also gained a reputation for being ill-tempered and kicked at visitors. In 1882, Ethiopia sent a zebra to French president Jules Grévy, and the species it belonged to was named in his honour. Attempts to domesticate zebras were largely unsuccessful. It is possible that, having evolved under pressure from the many large predators of Africa, including early humans, they became more aggressive, thus making domestication more difficult. However, zebras have been trained throughout history. In Rome, zebras are recorded to have pulled chariots during amphitheatre games starting in the reign of Caracalla (198 to 217 AD). In the late 19th century, the zoologist Walter Rothschild trained some zebras to draw a carriage in England, which he drove to Buckingham Palace to demonstrate that it could be done. However, he did not ride them, knowing that they were too small and aggressive. In the early 20th century, German colonial officers in East Africa tried to use zebras for both driving and riding, with limited success. 
Conservation As of 2016–2019, the IUCN Red List of mammals lists Grévy's zebra as endangered, the mountain zebra as vulnerable and the plains zebra as near-threatened. Grévy's zebra populations are estimated at fewer than 2,000 mature individuals, but they are stable. Mountain zebras number near 35,000 individuals, and their population appears to be increasing. Plains zebras are estimated to number 150,000–250,000, with a decreasing population trend. Human intervention has fragmented zebra ranges and populations. Zebras are threatened by hunting for their hide and meat, and by habitat destruction. They also compete with livestock and have their travelling routes obstructed by fences. Civil wars in some countries have also caused declines in zebra populations. By the early 20th century, zebra skins were being used to make rugs and chairs. In the 21st century, zebras may be taken by trophy hunters, as zebra skin rugs sell for $1,000 to $2,000. Trophy hunting was rare among African peoples, though the San were known to hunt zebras for meat. The quagga (E. quagga quagga) population was hunted by early Dutch settlers and later by Afrikaners to provide meat or skins. The skins were traded or used locally. The quagga was probably vulnerable to extinction due to its restricted range and because it was easy to find in large groups. The last known wild quagga died in 1878. The last captive quagga, a female in Amsterdam's Natura Artis Magistra zoo, lived there from 9 May 1867 until it died on 12 August 1883. The Cape mountain zebra, a subspecies of mountain zebra, nearly went extinct due to hunting and habitat destruction, with fewer than 50 individuals left by the 1950s. Protection by South African National Parks allowed the population to rise to 2,600 by the 2010s. Zebras can be found in numerous protected areas. Important areas for Grévy's zebra include Yabelo Wildlife Sanctuary and Chelbi Sanctuary in Ethiopia and Buffalo Springs, Samburu and Shaba National Reserves in Kenya. The plains zebra inhabits the Serengeti National Park in Tanzania, Tsavo and Masai Mara in Kenya, Hwange National Park in Zimbabwe, Etosha National Park in Namibia, and Kruger National Park in South Africa. Mountain zebras are protected in Mountain Zebra National Park, Karoo National Park and Goegap Nature Reserve in South Africa, as well as Etosha and Namib-Naukluft Park in Namibia.
Biology and health sciences
Perissodactyla
null
34467
https://en.wikipedia.org/wiki/ZX%20Spectrum
ZX Spectrum
The ZX Spectrum is an 8-bit home computer developed and marketed by Sinclair Research. It is one of the most influential computers ever made and one of the all-time bestselling British computers, with over five million units sold. It was released in the United Kingdom on 23 April 1982, and around the world in the following years, most notably in Europe, the United States, and Eastern Bloc countries. The machine was designed by English entrepreneur and inventor Sir Clive Sinclair and his small team in Cambridge, and was manufactured in Dundee, Scotland by Timex Corporation. It was made to be small, simple, and most importantly inexpensive, with as few components as possible. The addendum "Spectrum" was chosen to highlight the machine's colour display, which differed from the black-and-white display of its predecessor, the ZX81. Rick Dickinson designed its distinctive case, rainbow motif, and rubber keyboard. Video output is transmitted to a television set rather than a dedicated monitor, while application software is loaded from and saved onto compact audio cassettes. The ZX Spectrum was initially distributed by mail order, but after severe backlogs it was sold through High Street chains in the United Kingdom. It was released in the US as the Timex Sinclair 2068 in 1983, and in some parts of Europe as the Timex Computer 2048. Ultimately the Spectrum was released as seven models, ranging from the entry-level version with 16 KB RAM released in 1982 to the ZX Spectrum +3 with 128 KB RAM and a built-in floppy disk drive in 1987. Throughout its life, the machine primarily competed with the Commodore 64, BBC Micro, Dragon 32, and the Amstrad CPC range. Over 24,000 software products were released for the ZX Spectrum. The Spectrum played a pivotal role in the early history of personal computing and video gaming, leaving a legacy that influenced generations. Its introduction led to a boom in companies producing software and hardware, the effects of which are still seen. It was among the first home computers aimed at a mainstream UK audience, with some crediting it for launching the British information technology industry. The Spectrum was Britain's top-selling computer until the Amstrad PCW surpassed it in the 1990s. It was discontinued in 1992. History The ZX Spectrum was conceived and designed by engineers at Sinclair Research, founded by English entrepreneur and inventor Clive Sinclair, who was well known for his eccentricity and pioneering ethic. On 25 July 1961, three years after passing his A-levels, he founded Sinclair Radionics Ltd as a vehicle to advertise his inventions and buy components. In 1972, Sinclair had competed with Texas Instruments to produce the world's first pocket calculator, the Sinclair Executive. By the mid 1970s, Sinclair Radionics was producing handheld electronic calculators, miniature televisions, and the ill-fated digital Black Watch wristwatch. Due to financial losses, Sinclair sought investment from the National Enterprise Board (NEB), which bought a 43% interest in the company and streamlined his product line. Sinclair's relationship with the NEB worsened, however, and by 1979 it opted to break up Sinclair Radionics entirely, selling off its television division to Binatone and its calculator division to ESL Bristol. After incurring a £7 million investment loss, Sinclair was given a golden handshake and an estimated £10,000 severance package. 
He had a former employee, Christopher Curry, establish a "corporate lifeboat" company named Science of Cambridge Ltd in July 1977, so named because it was located near the University of Cambridge. By this time inexpensive microprocessors had started appearing on the market, which prompted Sinclair to start producing the MK14, a computer teaching kit which sold well at a very low price. Encouraged by this success, Sinclair renamed his company Sinclair Research and started looking to manufacture personal computers. Keeping costs low was essential to prevent Sinclair's products from being outpriced by American or Japanese equivalents, as had happened to several previous Sinclair Radionics products. On 29 January 1980, the ZX80 home computer was launched to immediate popularity, notable for being one of the first computers available in the United Kingdom for less than £100. The company conducted no market research whatsoever prior to the launch of the ZX80; according to Sinclair, he "simply had a hunch" that the public was sufficiently interested to make such a project feasible, and he went ahead with ordering 100,000 sets of parts so that he could launch at high volume. On 5 March 1981, the ZX81 was launched worldwide to immense success, with more than 1.5 million units sold, 60% of them outside Britain. According to Ben Rosen, by pricing the ZX81 so low, the company had "opened up a completely new market among people who had never previously considered owning a computer". After its release, computing in Britain became an activity for the general public rather than the preserve of office workers and hobbyists. The ZX81's commercial success made Sinclair Research one of Britain's leading computer manufacturers, with Sinclair himself reportedly "amused and gratified" by the attention the machine received. Development Development of the ZX Spectrum began in September 1981, a few months after the release of the ZX81. Sinclair resolved to make his own products obsolete before his rivals developed the products that would do so. Parts of the designs of the ZX80 and ZX81 were reused to ensure a speedy and cost-effective manufacturing process. The team consisted of 20 engineers housed in a small office at 6 King's Parade, Cambridge. During early production, the machine was known as the ZX81 Colour or the ZX82, to highlight the machine's colour display, which differed from the black and white of its predecessors. The addendum "Spectrum" was added later on, to emphasise its 15-colour palette. Aside from a new crystal oscillator and extra chips to add additional kilobytes of memory, the ZX Spectrum was intended to be, in the words of Sinclair's marketing manager, essentially a "ZX81 with colour". According to Sinclair, the team also wanted to combine the ZX81's separate random-access memory sections for audio and video into a single bank. Chief engineer Richard Altwasser was responsible for the ZX Spectrum's hardware design. His main contribution was the design of the semi-custom uncommitted logic array (ULA) integrated circuit, which integrated the essential hardware functions on a single chip. Altwasser designed a graphics mode that required less than 7 kilobytes of memory and implemented it on the ULA. Steve Vickers of Nine Tiles wrote most of the ROM code. Lengthy discussions between Altwasser and Sinclair engineers resulted in a broad agreement that the ZX Spectrum must have high-resolution graphics, 16 kilobytes of memory, an improved cassette interface, and an impressive colour palette. 
To achieve this, the team had to divorce the central processing unit (CPU) from the main display hardware to enable it to work at full efficiency, a method which contrasted with the ZX81's integrated approach. The inclusion of colour in the display proved a major obstacle for the engineers. A Teletext-like approach was briefly considered, in which each line of text would have colour-change codes inserted into it. However, this was ruled out, as it was deemed unsuitable for high-resolution graphs or diagrams that involved multiple colour changes. Altwasser instead devised the idea of allocating a colour attribute to each character position on the screen. This ultimately used eight bits of memory for each character position: three bits to provide any one of eight foreground colours, three bits for the eight background colours, one bit for extra brightness and one bit for flashing. Overall, the system took up slightly less than 7 kilobytes of memory, leaving an additional 9 kilobytes for writing programs, a figure that pleased the team.
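Those figures can be verified with a little arithmetic, as in the illustrative Python sketch below. This is not Spectrum code: the text above specifies only the 3+3+1+1 bit split, so the exact bit positions used here (flash in the top bit, then brightness, background and foreground) and the palette numbering in the final comment are assumptions drawn from commonly documented Spectrum conventions.

```python
# Display memory budget and attribute byte of the scheme described above,
# assuming the commonly documented Spectrum bit layout (an assumption;
# the text only specifies the 3+3+1+1 split).

BITMAP_BYTES = 256 * 192 // 8      # 1 bit per pixel        -> 6144 bytes
ATTR_BYTES = 32 * 24               # 1 attribute per cell   ->  768 bytes
TOTAL = BITMAP_BYTES + ATTR_BYTES  # 6912 bytes, slightly under 7 KB

def make_attr(ink: int, paper: int, bright: bool = False, flash: bool = False) -> int:
    """Pack one character cell's colours into a single attribute byte."""
    assert 0 <= ink <= 7 and 0 <= paper <= 7
    return (flash << 7) | (bright << 6) | (paper << 3) | ink

def decode_attr(byte: int) -> dict:
    """Unpack an attribute byte back into its four fields."""
    return {"ink": byte & 0x07, "paper": (byte >> 3) & 0x07,
            "bright": bool(byte & 0x40), "flash": bool(byte & 0x80)}

print(f"display file: {BITMAP_BYTES} + {ATTR_BYTES} = {TOTAL} bytes")
attr = make_attr(ink=7, paper=1, bright=True)  # 7 and 1 are white and blue in the usual palette
print(hex(attr), decode_attr(attr))
```

The 6,912-byte total is the "slightly less than 7 kilobytes" referred to above; on a 16 KB machine it leaves roughly 9 KB free for programs, matching the figure that pleased the team.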
Much of the firmware was written by computer scientist Steve Vickers of Nine Tiles, who compiled all the control routines to produce the Sinclair BASIC interpreter, a custom variant of the general-purpose BASIC programming language. Writing a custom interpreter made it possible to fit all of its functionality into a very small amount of read-only memory (ROM). The development process of the software was marked by disagreements between Nine Tiles and Sinclair Research. Sinclair placed an emphasis on expediting the release of the Spectrum, primarily by minimising alterations to the software from the ZX81, which had in turn been based on the ZX80's software. The software architecture of the ZX80, however, had been tailored to a severely constrained memory system, and in Nine Tiles' opinion it was unsuitable for the enhanced processing demands of the ZX Spectrum. Sinclair favoured solving this with expansion modules on the existing framework, as with the ZX81, an approach with which Nine Tiles disagreed. Ultimately, both designs were developed, but Vickers and Nine Tiles were unable to finish their version before the launch of the Spectrum, and it was not used. The distinctive case and colourful design of the ZX Spectrum were the creation of Rick Dickinson, a young British industrial designer who had been hired by Sinclair to design the ZX81. Dickinson was tasked with designing a sleeker and more "marketable" appearance for the new machine, whilst ensuring all 192 BASIC functions could fit onto 40 physical keys. Early sketches from August 1981 showed a case that was more angular and wedge-like, in a similar vein to an upgraded ZX81 model. Dickinson later settled on a flatter design with a raised rear section and rounded sides, in order to depict the machine as "more advanced" rather than a mere upgrade. In drawing up potential logos, Dickinson proposed a series of different logotypes which all featured rainbow slashes across the keyboard. The rubber keyboard, using a new technology, simplified the design from the several hundred components of a conventional moving keyboard down to "four to five" moving parts. The keyboard was still undergoing changes as late as February 1982; some sketches included a roundel-on-square key design which later featured on the Spectrum+ model. Dickinson recalled in 2007 that "everything was cost driven" and that the minimalist, Bauhaus approach to the Spectrum gave it an elegant yet "[non] revolutionary" form. The drawing board on which Dickinson designed the ZX Spectrum is now on display in the Science Museum in London. The need for an improved cassette interface was apparent from the number of complaints received from ZX81 users, who encountered problems when trying to save and load programs. To increase the data transfer speed, the team significantly decreased the length of the tones that represent binary data. To increase reliability, a leading period of constant tone was introduced, which allowed the cassette recorder's automatic gain control to settle down, eliminating hisses on the tape. A Schmitt trigger was added inside the ULA to reduce noise in the received signal. Originally, the team aimed for a data transfer speed of 1000 baud, but succeeded in getting it to work at a considerably faster 1500 baud. Unlike the ZX81, the Spectrum was able to maintain its display during loading and saving operations, and programmers took advantage of this to show a splash screen whilst loading took place in the background. As with the ZX81, the ZX Spectrum was manufactured in Dundee, Scotland, by Timex Corporation at the company's Dryburgh factory. Prior to the manufacture of the ZX81, however, Timex had little experience in assembling electronics and had not originally been an obvious choice of manufacturing subcontractor. It was a well-established manufacturer of mechanical watches but was facing a crisis at the beginning of the 1980s; profits had dwindled to virtually zero as the market for mechanical watches stagnated in the face of competition from digital and quartz watches. Recognising the trend, Timex's director, Fred Olsen, determined that the company would diversify into other areas and signed a contract with Sinclair. Launch The ZX Spectrum was officially revealed to journalists by Sinclair at the Churchill Hotel in Marylebone, London, on 23 April 1982. Later that week, the machine was officially presented in a "blaze of publicity" at the Earl's Court Computer Show in London and the ZX Microfair in Manchester. The ZX Spectrum was launched with two models: a 16 KB 'basic' version and an enhanced 48 KB variant. The former had an undercutting price of £125, significantly lower than its main competitor, the BBC Micro, whilst the latter's price of £175 was around a third of that of an Apple II computer. Upon release, the keyboard surprised many users due to its rubber keys, described as offering the feel of "dead flesh". Sinclair himself remarked that the keyboard's rubber mould was "unusual", but consumers were undeterred. Despite very high demand, Sinclair Research was "notoriously late" in delivering the ZX Spectrum. The company's practice of offering mail-order sales before units were ready ensured a constant cash flow, but left distribution lacking. Nigel Searle, the newly appointed chief of Sinclair's computer division, said in June 1982 that the company had no plans to stock the new machine in WHSmith, at the time Sinclair's only retailer. Searle explained that the mail-order system was in place because there were no "obvious" retail outlets in the United Kingdom which could sell personal computers, and it made "better sense" financially to continue selling through mail order. The company's conservative approach to distributing the machine was criticised, with disillusioned customers telephoning and writing letters. Demand sky-rocketed beyond Sinclair's planned 20,000 monthly unit output to a backlog of 30,000 orders by July 1982. 
Due to a scheduled holiday at the Timex factory that summer, the backlog had risen to 40,000 units. Sinclair issued a public apology in September that year and promised that the backlog would be cleared by the end of that month. Supply did not return to normal until the 1982 Christmas season, however. Production of the machine rapidly increased with the arrival of the less expensive Issue 2 motherboard, a redesign of the main circuit board which addressed hardware manufacturing defects that had affected production of the first model. Sales of the ZX Spectrum reached 200,000 in its first nine months, rising to 300,000 for the whole of the first year. By August 1983 total sales in Britain and Europe had exceeded 500,000, and the millionth Spectrum was manufactured on 9 December 1983. By this point, an average of 50,000 units were being purchased each month. In July 1983, an enhanced version of the ZX Spectrum was launched in the United States as the Timex Sinclair 2068. Advertisements described it as offering 72 kilobytes of memory and a full range of colour and sound for a price under $200. Despite the improvements over its British counterpart, sales proved poor, and Timex Sinclair collapsed the following year. Success and market domination A crucial part of the company's marketing strategy was to implement regular price cuts at strategic intervals to maintain market share. Ian Adamson and Richard Kennedy noted that Sinclair's method was driven by securing his leading position through "panicking" the competition. While most companies at the time reduced the prices of their products while their market share was dwindling, Sinclair Research discounted theirs shortly after sales had peaked, throwing the competition into "utter disarray". Sinclair Research made a profit of £14 million in 1983, compared to £8.5 million the previous year. Turnover doubled from £27.2 million to £54.5 million, which equated to roughly £1 million for each person employed directly by the company. Clive Sinclair became a focal point of the ZX Spectrum's marketing campaign, putting a human face onto the business. Sinclair Research was portrayed in the media as a "plucky" British challenger taking on the technical and marketing might of giant American and Japanese corporations. As David O'Reilly noted in 1986, "by astute use of public relations, particularly playing up his image of a Briton taking on the world, Sinclair has become the best-known name in micros." The media latched onto Sinclair's image; his "Uncle Clive" persona is said to have been created by the gossip columnist for Personal Computer World. The press praised Sinclair as a visionary genius, with The Sun lauding him as "the most prodigious inventor since Leonardo da Vinci". Adamson and Kennedy wrote that Sinclair outgrew the role of microcomputer manufacturer and "accepted the mantle of pioneering boffin leading Britain into a technological utopia". Sinclair's contribution to the technology sector resulted in him being knighted on the recommendation of Margaret Thatcher in the Queen's 1983 Birthday Honours List. The United Kingdom was largely immunised from the effects of the video game crash of 1983, due to the saturation of home computers such as the ZX Spectrum. The microcomputer market continued to grow, and game development was unhindered despite the turbulence in the American market. Indeed, computer games remained the dominant sector of the British home video game market up until they were surpassed by Sega and Nintendo consoles in 1991. 
By the end of 1983 there were more than 450 companies in Britain selling video games on cassette, compared to 95 the year before. An estimated 10,000 to 50,000 people, mostly young men, were developing games from their homes, an estimate based on advertisements in popular magazines. The growth of video games during this period has been compared to the punk subculture, fuelled by young people making money from their games. By the mid 1980s, Sinclair Research's share of the British home computer market had climbed to a high of 40 per cent. Sales in the 1984 Christmas season were described as "extremely good". In early 1985 the British press reported the home computer boom to have ended, leading many companies to slash hardware prices in anticipation of lower sales. Despite this, celebration of Sinclair's success in the computing market continued at the Which Computer? show in Birmingham, where the five-millionth Sinclair machine (a gold-coloured QL) was issued as a prize. Later years and company decline The ZX Spectrum's successor, the Sinclair QL, was officially announced on 12 January 1984, shortly before the Macintosh 128K went on sale. In contrast with its predecessors, the QL was aimed at more serious, professional home users. It suffered from several design flaws; fully operational QLs were not available until the late summer, and complaints against Sinclair concerning delays were upheld by the Advertising Standards Authority (ASA) in May of that year. Particularly serious were allegations that Sinclair was cashing cheques months before machines were shipped. By autumn 1984, Sinclair was still publicly forecasting that it would be a "million seller" and that 250,000 units would be sold by the end of the year. QL production was suspended in February 1985, and the price was halved by the end of the year. It ultimately flopped, with 139,454 units manufactured. The ZX Spectrum+, a rebranded ZX Spectrum with identical technical specifications except for the QL-like keyboard, was introduced in October 1984 and made available in WHSmith's stores the day after its launch. Retailers stocked the device in high quantities, anticipating robust Christmas sales. Nevertheless, the product did not perform as well as projected, leading to a significant drop in Sinclair's income from orders in January, as retailers were left with surplus stock. Subsequently, an upgraded model, the ZX Spectrum 128, was released in Spain in September 1985, with development financed by the Spanish distributor Investrónica. The launch of this model in the UK was postponed until January 1986 due to the substantial leftover inventory of the prior model. While the Sinclair QL was in development, Sinclair also hoped to repeat his success with the Spectrum in the fledgling electric vehicle market, which he saw as ripe for a new approach. On 10 January 1985, Sinclair unveiled the Sinclair C5, a small one-person battery-electric recumbent tricycle. It marked the culmination of Sir Clive's long-running interest in electric vehicles. The C5 turned out to be a significant commercial failure, selling only 17,000 units and losing Sinclair £7 million. It has since been described as "one of the great marketing bombs of postwar British industry". The ASA ordered Sinclair to withdraw advertisements for the C5 after finding that the company's claims about its safety could not be proved or justified. The combined failures of the C5 and QL caused investors to lose confidence in Sinclair's judgement. 
In May 1985, Sinclair Research announced their intention to raise an additional £10 to £15 million to restructure the organisation. Given the loss of confidence in the company, securing the funds proved to be a challenging task. In June 1985, business magnate Robert Maxwell disclosed a takeover bid for Sinclair Research through Hollis Brothers, a subsidiary of his Pergamon Press. However, the deal was terminated in August 1985. The future of Sinclair Research remained uncertain until 7 April 1986, when the company sold its entire computer product range, along with the "Sinclair" brand name, to Alan Sugar's Amstrad for £5 million. The takeover sent ripples through the London Stock Exchange, but Amstrad's shares soon recovered, with one stockbroker affirming that "the City appears to have taken the news in its stride". Amstrad's acquisition of the brand name saw the release of three ZX Spectrum models throughout the late 1980s, each with varying improvements. By 1990, Sinclair Research consisted of Sinclair and two other employees, down from 130 employees at its peak in 1985. The ZX Spectrum was officially discontinued in 1992, after ten years on the market. Sinclair Research continued to exist as a one-man company, marketing Sir Clive Sinclair's inventions until his death in September 2021. Hardware Technical specifications The central processing unit is a Zilog Z80, an 8-bit microprocessor, with a clock rate of 3.5 MHz. The original model Spectrum has 16 KB of ROM and either 16 KB or 48 KB of RAM. Graphics Video output is channelled through an RF modulator, intended for use with contemporary television sets, to provide a simple colour graphic display. Text is displayed using a grid of 32 columns × 24 rows of characters from the ZX Spectrum character set, or from a custom set. The machine features a colour palette of 15 colours, consisting of seven saturated colours at two levels of brightness, along with black. The image resolution is 256×192 pixels, subject to the same colour limitations. To optimise memory usage, colour is stored separately from the pixel bitmap in a low-resolution 32×24 grid overlay, corresponding to the character cells. In practical terms, this means that all pixels within an 8×8 character block share one foreground colour and one background colour. Altwasser received a patent for this design. An "attribute" consists of a foreground and a background colour, a brightness level (normal or bright) and a flashing "flag" which, when set, causes the two colours to swap at regular intervals. This scheme leads to what was dubbed "colour clash" or attribute clash: the colour of an individual pixel could not be chosen freely, only the colour attributes of its 8×8 block. This became a distinctive feature of the Spectrum, requiring programs, especially games, to be designed with this limitation in mind. In contrast, other machines available at the same time, such as the Amstrad CPC or the Commodore 64, did not suffer from this limitation. While the Commodore 64 also employed colour attributes, it utilised a special multicolour mode and hardware sprites to circumvent attribute clash. Sound Sound output is produced through a built-in beeper capable of generating a single channel spanning ten octaves. It is controlled by a single bit; by toggling it on and off, simple sounds are generated. This speaker, capable of producing just one note at a time, was governed by the BASIC command BEEP, whose parameters set pitch and duration. Furthermore, the processor remained occupied exclusively with BASIC BEEPs until their completion, limiting concurrent operations. Despite these constraints, it marked a significant step forward from the ZX81, which lacked any sound capabilities. Resourceful programmers swiftly devised workarounds; the rudimentary audio functionality compelled developers to explore unconventional methods, such as programming the beeper to emit multiple pitches. Later software became available that allowed for two-channel sound playback.
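The workaround mentioned above is easy to sketch. A 1-bit speaker is only ever on or off, so pitch comes entirely from how fast the bit is toggled, and alternating rapidly between two tones approximates two simultaneous channels. The following is a minimal modern Python illustration of the idea, not Spectrum code; the sample rate, note frequencies and slice length are arbitrary choices.

```python
# One-bit "beeper" synthesis: the speaker state is only ever 0 or 1, so a
# pitch is produced by toggling the bit at the desired frequency. Rapidly
# interleaving slices of two tones fakes two simultaneous channels.
# (Illustrative Python, not Spectrum Z80 code; all parameters arbitrary.)
import struct
import wave

RATE = 44100  # samples per second

def square(freq, seconds):
    """One-bit square wave: the output flips every half period."""
    half = RATE / (2 * freq)  # samples per half-cycle
    return [int(i // half) % 2 for i in range(int(RATE * seconds))]

def interleave(f1, f2, seconds, slice_s=0.01):
    """Alternate short slices of two tones, one-bit style."""
    out, t = [], 0.0
    while t < seconds:
        out += square(f1, slice_s) + square(f2, slice_s)
        t += 2 * slice_s
    return out

with wave.open("beeper.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    frames = interleave(440, 660, 2.0)  # two interleaved notes, two seconds
    w.writeframes(b"".join(struct.pack("<h", 20000 if s else -20000) for s in frames))
```

On the real machine the same trick had to be performed by the Z80 itself, which is why, as noted above, the processor was fully occupied while sound played: every toggle of the bit is an instruction the CPU must execute at exactly the right moment.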
Moreover, the processor remained occupied with a BEEP statement until the note completed, limiting concurrent operations. Despite these constraints, the beeper marked a significant step forward from the ZX81, which lacked any sound capabilities. Resourceful programmers swiftly devised workarounds for the rudimentary audio hardware, such as driving the beeper in software to produce the effect of multiple simultaneous pitches. Later software became available that allowed for two-channel sound playback. The machine includes an expansion bus edge connector and 3.5 mm audio in/out ports, facilitating the connection of a cassette recorder for loading and saving programs and data. The EAR port has a higher output than the MIC port and is recommended for headphones, while the MIC port is intended for attachment to other audio devices as a line-in source. Other integrated peripherals The ZX Spectrum carried over elements from the ZX81. The keyboard decoding and cassette interfaces are nearly identical, although the latter was programmed for higher-speed loading and saving. The central ULA integrated circuit bears some resemblance to that of the ZX81, but it features a hardware-based television raster generator with colour support. This indirectly provides the new machine with roughly four times the processing power of the ZX81, as the Z80 is relieved of video generation tasks. An initial ULA design flaw occasionally led to incorrect keyboard scanning, which was resolved by adding a small circuit board, mounted upside down next to the CPU, in Issue 1 ZX Spectrums. Firmware The machine's Sinclair BASIC interpreter is stored in 16 KiB of ROM, along with essential system routines. The ROM code, responsible for tasks such as floating point calculations and expression parsing, exhibited significant similarities to the ZX81's, although a few outdated routines remained in the Spectrum ROM. The Spectrum's keyboard is imprinted with BASIC keywords. Many keywords can be entered with a single keystroke; others require a change of keyboard mode using a few extra keystrokes. The BASIC interpreter is derived from the one used on the ZX81, and a BASIC program for the ZX81 can be entered into a ZX Spectrum with minimal modifications. However, Spectrum BASIC introduced numerous additional features, enhancing its usability. The ZX Spectrum character set was expanded compared to that of the ZX81, which lacked lowercase letters. Spectrum BASIC incorporated extra keywords for better graphics and sound functionality, and support for multi-statement lines was added. The built-in ROM tape modulation software routines for cassette data storage enable data transfers at an average speed of 171 bytes per second (B/s), with a theoretical peak speed of 256 B/s; a back-of-the-envelope check of these figures appears below. The tape modulation is significantly more advanced than the ZX81's, giving roughly four times the average speed. Sinclair Research models ZX Spectrum 16K/48K The original ZX Spectrum is remembered for its rubber chiclet keyboard, diminutive size and distinctive rainbow motif. It was originally released on 23 April 1982 with 16 KB of RAM for or with 48 KB for ; these prices were reduced to and respectively in 1983. Owners of the 16 KB model could purchase an internal 32 KB RAM upgrade, which for early "Issue 1" machines consisted of a daughterboard. Later issue machines required the fitting of eight dynamic RAM chips and a few TTL chips. Users could mail their 16K Spectrums to Sinclair to be upgraded to 48 KB versions. 
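The tape data rates quoted above can be sanity-checked from the format's pulse timings. The T-state figures below (two pulses of 855 T-states for a '0' bit, two of 1710 for a '1') are the commonly documented values for the ROM routines and are an assumption of this sketch, not taken from the text:

```python
CLOCK_HZ = 3_500_000                 # Z80 clock, 3.5 MHz
T_ZERO = 2 * 855                     # T-states per '0' bit (two pulses) -- assumed
T_ONE = 2 * 1710                     # T-states per '1' bit (two pulses) -- assumed

peak_bps = CLOCK_HZ / T_ZERO                    # all-zero data: fastest case
avg_bps = CLOCK_HZ / ((T_ZERO + T_ONE) / 2)     # random data: half ones, half zeros

print(f"peak:    {peak_bps / 8:.0f} bytes/s")   # ~256 B/s
print(f"average: {avg_bps / 8:.0f} bytes/s")    # ~171 B/s
print(f"48 KB program: {48 * 1024 / (avg_bps / 8) / 60:.1f} min to load")  # ~4.8 min
```

Under these assumptions a full 48 KB program takes a little under five minutes to load, which matches period experience.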
Later revisions contained 64 KB of memory but were configured such that only 48 KB were usable. External 32 KB RAM packs that mounted in the rear expansion slot were available from third parties. Both machines had 16 KB of onboard ROM. An "Issue 1" ZX Spectrum can be distinguished from Issue 2 or 3 models by the colour of the keys – light grey for Issue 1, blue-grey for later machines. Although the official service manual states that approximately 26,000 of these original boards were manufactured, subsequent serial number analysis shows that only 16,000 were produced, almost all of which fell in the serial number range 001-000001 to 001-016000. An online tool now exists that allows users to ascertain the likely issue number of their ZX Spectrum by inputting the serial number. These models underwent numerous changes to their motherboard design throughout their production life, mainly to improve manufacturing efficiency, but also to correct bugs in previous boards. Another issue was with the Spectrum's power supply. In March 1983, Sinclair issued an urgent recall warning for all owners of models bought after 1 January 1983: plugs with a non-textured surface were at risk of causing electric shock, and owners were asked to send them back to a warehouse in Cambridgeshire, which would supply a replacement within 48 hours. ZX Spectrum+ Development of the ZX Spectrum+ began in June 1984, and the machine was released on 15 October that year at £179. It was assembled by AB Electronics in South Wales and Samsung in South Korea. This 48 KB Spectrum introduced a new QL-style case with an injection-moulded keyboard and a reset button that functions as a switch shorting across the CPU reset capacitor. Electronically, it was identical to the previous 48 KB model. The machine outsold the rubber-key model two to one; however, some retailers reported a failure rate of up to 30%, compared with a more typical 5–6% for the older model. In early 1985, the original Spectrum was officially discontinued, and the ZX Spectrum+ was reduced in price to £129. ZX Spectrum 128 In 1985, Sinclair developed the ZX Spectrum 128 (codenamed Derby) in conjunction with their Spanish distributor Investrónica (a subsidiary of the El Corte Inglés department store group). Investrónica had helped adapt the ZX Spectrum+ to the Spanish market after the Spanish government introduced a special tax on all computers with 64 KB of RAM or less, and a law which obliged all computers sold in Spain to support the Spanish alphabet and show messages in Spanish. The appearance of the ZX Spectrum 128 is similar to the ZX Spectrum+, with the exception of a large external heatsink for the internal 7805 voltage regulator added to the right-hand end of the case, replacing the internal heatsink of previous versions. This external heatsink led to the system's nickname, "The Toast Rack". New features included 128 KB of RAM with RAM disc commands, three-channel audio via the AY-3-8912 chip, MIDI compatibility, an RS-232 serial port, an RGB monitor port, 32 KB of ROM including an improved BASIC editor, and an external keypad. The machine was unveiled and launched simultaneously in September 1985 at the SIMO '85 trade show in Spain, with a price of 44,250 pesetas. Sinclair later presented the ZX Spectrum 128 at The May Fair Hotel's Crystal Rooms in London, where he acknowledged that entertainment was the most common use of home computers. Due to the large number of unsold Spectrum+ models, Sinclair decided not to start selling it in the United Kingdom until January 1986, at a price of £179. 
The Zilog Z80 processor used in the Spectrum has a 16-bit address bus, which means only 64 KB of memory can be directly addressed. To make the extra 80 KB of RAM accessible, the designers used bank switching, so that the new memory would be available as eight switchable pages of 16 KB at the top of the address space (a toy model of this paging appears below). The same technique was used to page between the new 16 KB editor ROM and the original 16 KB BASIC ROM at the bottom of the address space. The new sound chip and MIDI-out capability were exposed to the BASIC programming language with the command PLAY, and a new command, SPECTRUM, was added to switch the machine into 48K mode, keeping the current BASIC program intact (although there is no command to switch back to 128K mode). To enable BASIC programmers to access the additional memory, a RAM disc was created where files could be stored in the additional 80 KB of RAM. The new commands took the place of two existing user-defined-character spaces, causing compatibility problems with certain BASIC programs. Unlike its predecessors, the 128 has no internal speaker and can only produce sound through the television's speaker. Amstrad models ZX Spectrum +2 The ZX Spectrum +2 marked Amstrad's entry into the Spectrum market shortly after their acquisition of the Spectrum range and "Sinclair" brand in 1986. This machine featured a brand-new grey case with a spring-loaded keyboard, dual joystick ports, and an integrated cassette recorder known as the "Datacorder", akin to that of the Amstrad CPC 464. The ROM was updated so that the screen on boot-up showed an Amstrad copyright message (© 1986 Amstrad in place of the previous © 1982 Sinclair Research Ltd). However, it was largely identical to the ZX Spectrum 128 in most technical aspects. The machine retailed for £149. The new keyboard did not feature the BASIC keyword markings seen on earlier Spectrums, except for the keywords LOAD, CODE, and RUN, which were useful for loading software. Instead, the +2 introduced a menu system, almost identical to that of the ZX Spectrum 128, allowing users to switch between 48K BASIC programming with keywords and 128K BASIC programming, where all words, both keywords and others, needed to be typed out in full (though keywords were still stored internally as one character each). Despite these changes, the layout remained identical to that of the 128. ZX Spectrum +3 The ZX Spectrum +3, which was launched in 1987, bore a resemblance to its predecessor but introduced a built-in 3-inch floppy disk drive in place of the cassette drive. Initially priced at £249, it later retailed for £199. It was the only Spectrum model capable of running the CP/M operating system without additional hardware. Unlike its predecessors, the ZX Spectrum +3 power supply utilised a DIN connector and featured "Sinclair +3" branding on the case. Significant alterations caused a series of incompatibilities, such as the removal of several lines on the expansion bus edge connector, which resulted in complications for various peripherals. Additionally, changes in memory timing led to certain RAM banks being contended, causing failures in high-speed colour-changing effects. The keypad scanning routines were also removed from the ROM, rendering some older 48K and 128K games incompatible with the machine. The ZX Interface 1 was also rendered incompatible due to disparities in ROM and expansion connectors, making it impossible to connect and use the Microdrive units. 
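The following toy model makes the 128's paging scheme concrete. It is a sketch rather than a faithful reproduction of the hardware (the real machine selects pages through a write to an I/O port, and also pages the two ROMs, which this model omits); it shows the essential idea that one 16 KB window at the top of the 64 KB address space can be backed by any of eight physical pages.

```python
PAGE_SIZE = 16 * 1024
TOP_BASE = 0xC000                     # top 16 KB of the 64 KB address space

class Banked128:
    """Toy model: 8 x 16 KB pages (128 KB) behind a 16-bit address bus."""

    def __init__(self):
        self.pages = [bytearray(PAGE_SIZE) for _ in range(8)]
        self.paged_in = 0             # page currently visible at 0xC000

    def select_page(self, n):
        # On the real machine this is a write to the memory paging port.
        self.paged_in = n

    def write(self, addr, value):
        assert addr >= TOP_BASE, "only the banked region is modelled here"
        self.pages[self.paged_in][addr - TOP_BASE] = value

    def read(self, addr):
        assert addr >= TOP_BASE, "only the banked region is modelled here"
        return self.pages[self.paged_in][addr - TOP_BASE]

mem = Banked128()
mem.select_page(3)
mem.write(0xC000, 0xAA)               # stored in physical page 3
mem.select_page(0)
assert mem.read(0xC000) == 0x00       # same address, different physical RAM
```

A RAM disc is then just bookkeeping on top of this: a table recording which pages, and which offsets within them, hold each file's bytes.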
Production of the +3 was discontinued in December 1990, reportedly in response to Amstrad's relaunch of their CPC range; an estimated 15% of ZX Spectrums sold at the time were +3 models. The +2B model, the only other model still in production at this point, continued to be manufactured, as it was believed not to be in direct competition with other computers in Amstrad's product range. ZX Spectrum +2A, +2B and +3B The ZX Spectrum +2A was a new version of the Spectrum +2 using the same circuit board as the Spectrum +3. It was sold from late 1988 and, unlike the original grey +2, was housed inside a black case. The Spectrum +2A/+3 motherboard (Amstrad part number Z70830) was designed so that it could be assembled with a +2-style "Datacorder" connected instead of the floppy disk controller. The power supply of the ZX Spectrum +2A used the same pinout as the +3 and had "Sinclair +2" written on the case. 1989 saw the release of the ZX Spectrum +2B and ZX Spectrum +3B. They are functionally similar in design to the Spectrum +2A and +3, though changes to the generation of the audio output signal were made to resolve problems with clipping. The +2B board has no provision for floppy disk controller circuitry, while the +3B motherboard has no provision for connecting an internal tape drive. Production of all Amstrad Spectrum models ended in 1992. Licences and clones Official licences Following Amstrad's acquisition of Sinclair Research's computer business, Sir Clive Sinclair formed the mail-order company Cambridge Computer, which he used to release the portable Cambridge Z88 in February 1987. The Z88 had its roots in the Pandora, an ill-fated portable Spectrum whose ULA incorporated the high-resolution video mode originally pioneered in the Timex Sinclair 2068. The Pandora was designed with a flat-screen monitor and Microdrives, intended as Sinclair's solution for portable business computing. Timex of Portugal developed and produced several branded computers, including a PAL region-compatible version of the Timex Sinclair 2068, known as the Timex Computer 2048. This variant features distinct buffers for both the ULA and the CPU, significantly enhancing compatibility with ZX Spectrum software compared to the American model. Software developed for the Portuguese-made 2048 remained fully compatible with its American counterpart, as the ROMs were left unaltered. Timex of Portugal also created a ZX Spectrum "emulator" in cartridge form. Several other upgrades were introduced, including a BASIC64 cartridge enabling the machine to utilise high-resolution (512×192) modes. This model saw significant success in both Portugal and Poland. In India, Deci Bells Electronics Limited, based in Pune, introduced a licensed version of the Spectrum+ in 1988. Dubbed the "dB Spectrum+", it performed well in the Indian market, selling over 50,000 units and achieving an 80% market share. Unofficial clones Numerous unofficial Spectrum clones were produced, especially in Eastern Europe. Many small start-ups in the Soviet Union assembled various clones, distributed through poster adverts and street stalls. Over 50 such clone models existed in total. In Czechoslovakia, the first production ZX Spectrum clone was the Didaktik Gama, sporting two switched 32 KB memory banks and 16 KB of slower RAM containing graphical data for video output. It was followed by the Didaktik M, for which 5.25-inch and 3.5-inch floppy disk drives later became available, and the Didaktik Kompakt, which had a built-in floppy drive. 
There were also clones produced in South America, such as the Brazilian-made TK90X and TK95, as well as the Argentine Czerweny CZ models. In the United Kingdom, Spectrum peripheral vendor Miles Gordon Technology (MGT) released the SAM Coupé 8-bit home computer in December 1989. It was designed to be fully compatible with the ZX Spectrum 48K, housing a Zilog Z80B processor clocked at 6 MHz and 256 KB of RAM. By this point, however, the Amiga and Atari ST had taken hold of the market, and MGT went into receivership in June 1990. In his book Retro Tech, Peter Leigh considers the SAM Coupé to be the "true" successor of the ZX Spectrum. Peripherals Several peripherals were developed and marketed by Sinclair. The ZX Printer, a small spark printer, was already on the market upon the ZX Spectrum's release, as the Spectrum's computer bus was partially backward-compatible with that of its predecessor, the ZX81. It uses two electrically charged styli to burn away the surface of aluminium-coated paper to reveal the black underlay. The ZX Interface 1 add-on module, launched in 1983, includes 8 KB of ROM, an RS-232 serial port, a proprietary local area network (LAN) interface known as ZX Net, and a port for connecting up to eight ZX Microdrives — tape-loop cartridge storage devices released in July 1983, known for their speed, albeit with some reliability concerns. Sinclair Research also introduced the ZX Interface 2, which added two joystick ports and a ROM cartridge port. Although the ZX Microdrives were initially greeted with good reviews, they never became a popular distribution method due to fears over cartridge quality and piracy. Third-party hardware add-ons were available throughout the machine's life, including the Kempston joystick interface, the Morex Peripherals Centronics/RS-232 interface, the Currah Microspeech unit for speech synthesis, the Videoface Digitiser, the SpecDrum drum machine, and the Multiface, a snapshot and disassembly tool from Romantic Robot. After the original ZX Spectrum's keyboard received criticism for its "dead flesh" feel, external keyboards became popular. In 1983, DK'Tronics launched a light pen compatible with some drawing software. The Abbeydale Designers/Watford Electronics SPDOS and KDOS disk drive interfaces were bundled with office productivity software, including the Tasword word processor, Masterfile database, and Omnicalc spreadsheet. This bundle, along with OCP's Stock Control, Finance, and Payroll systems, introduced small businesses to streamlined computerised operations. In 1987 and 1988, Miles Gordon Technology released the DISCiPLE and +D systems. These systems could store memory images as disk snapshots, allowing users to restore the Spectrum to its exact previous state. Both systems were compatible with the Microdrive command syntax, simplifying the porting of existing software. In the mid-1980s, Telemap Group launched a fee-based service allowing ZX Spectrum users to connect their machines to the Micronet 800 information provider via a Prism Micro Products VTX5000 modem. Micronet 800, hosted by Prestel, provided news and information about microcomputers and offered a form of instant messaging and online shopping. Software Most Spectrum software was originally distributed on audio cassette tapes, intended to work with a normal domestic cassette recorder. Software was also distributed through print media such as magazines and books, in the form of type-in program listings. 
To use a type-in listing, the reader would enter the BASIC program by hand, run it, and save it to cassette for later use. Some magazines distributed 7-inch 33 rpm flexi disc records, or "Floppy ROMs", a flexible variant of the vinyl record that could be played on a standard record player. Some radio stations would broadcast software as an audio data stream via frequency modulation or medium wave, so that listeners could record it directly onto an audio cassette. ZX Spectrum-focused radio programmes existed in the United Kingdom and could be received over long distances on domestic radio receivers. Different types of software released for the machine include programming language implementations, databases, word processors (Tasword being the most prominent), spreadsheets, drawing and painting tools (e.g. OCP Art Studio), 3D modelling (e.g. VU-3D) and archaeology software. Over 24,000 different software titles were released for the ZX Spectrum throughout its lifespan. The ZX Spectrum had an extensive library of video games, including iconic titles such as Manic Miner, Jet Set Willy, Chuckie Egg, Elite, Sabre Wulf, Knight Lore, and The Hobbit. Many of the popular ZX81 titles were rewritten for the Spectrum to take advantage of the newer machine's colour and sound capabilities (Psion's Flight Simulation being a notable example). These games were instrumental in establishing the machine as a prominent gaming platform during the 1980s. The hardware limitations of the machine demanded a particular level of creativity from video game designers. From August 1982 the ZX Spectrum came bundled with Horizons: Software Starter Pack, a software compilation which included ten demonstration programs. Some ZX Spectrum games hold a number of industry firsts and Guinness World Records. These include Ant Attack, the first home computer game to use isometric graphics; Turbo Esprit, the first open-world driving game; and Redhawk, which featured the first superhero created specifically for a video game. Reception Initial reception of the ZX Spectrum was generally positive. Critics in Britain welcomed the new machine as a worthy successor to the ZX81; Robin Bradbeer of Sinclair User praised the additional keyboard functions the Spectrum had to offer, and lauded the "strength" of its ergonomic and presentable design. Tim Hartnell from Your Computer observed that Sinclair had improved on the shortcomings of the ZX80 and ZX81 by revamping the Spectrum's load and save functions, which made working with the machine "a pleasure". Hartnell concluded that despite minor faults, the machine was "way ahead" of its competitors, and that its specification exceeded that of the BBC Micro Model A. Terry Pratt of Computer and Video Games compared the Spectrum's keyboard unfavourably to the typewriter-style keyboard used on the BBC Micro, opining that it was an improvement over the ZX81's but unsuited to "typists". In a similar vein, David Tebbutt from Personal Computer World thought that the Spectrum's keyboard felt more like a calculator than a typewriter, but praised its functional versatility. Likewise, Gregg Williams from BYTE criticised the keyboard, declaring that despite the machine's attractive price the layout "is impossible to justify" and "poorly designed" in several respects. Williams was sceptical of the computer's appeal to American consumers if sold for  — "hardly competitive with comparable low-cost American units" — and expected that Timex would sell it for . 
A more negative review came from Jim Lennox of Technology Week, who wrote that "after using it [...] I find Sinclair's claim that it is the most powerful computer under £500 unsustainable. Compared to more powerful machines, it is slow, its colour graphics are disappointing, its BASIC limited and its keyboard confusing". Legacy The importance of the ZX Spectrum and its role in the early history of personal computing and video gaming have left it regarded as the most important and influential computer of the 1980s. Some observers credit it with launching the British information technology industry during a period of recession, and with introducing home computing to the masses. It is also one of the best-selling British computers of all time, with over five million units sold by the end of the Spectrum's lifespan in 1992. It retained the title of Britain's top-selling computer until the Amstrad PCW surpassed it in the 1990s, with eight million units sold by the end of the PCW's lifespan in 1998. The ZX Spectrum is affectionately known as the "Speccy" by elements of its fan following. A number of notable games developers began their careers on the ZX Spectrum. Tim and Chris Stamper founded Ultimate Play the Game in 1982, which found success with blockbuster hits such as Jetpac (1983), Atic Atac (1983), Sabre Wulf (1984), and Knight Lore (1984). The Stamper brothers later founded Rare, which became Nintendo's first Western third-party developer. David Perry, the founder of Shiny Entertainment, moved from Northern Ireland to England to focus on developing games for the ZX Spectrum. Some programmers have continued to code for the platform by using emulators, and a homebrew community continues into the present day, with several games being released commercially by new software houses such as Cronosoft. In 2020, a museum dedicated to the ZX Spectrum and other Sinclair products opened in Cantanhede, Portugal. Recreations In 2013, an FPGA-based redesign of the original ZX Spectrum, known as the ZX Uno, was formally announced. All of its hardware, firmware and software are open source, released under a Creative Commons Share-Alike licence. The use of a Spartan FPGA allows the system to re-implement not only the ZX Spectrum but also many other 8-bit computers and games consoles. The device also runs modern open FPGA machines such as the Chloe 280SE. The Uno was successfully crowdfunded in 2016 and the first boards went on sale the same year. In January 2014, Elite Systems, who had produced a successful range of software for the original ZX Spectrum in the 1980s, announced plans for a Spectrum-themed Bluetooth keyboard that would attach to mobile devices. The company used a crowdfunding campaign to fund the Recreated ZX Spectrum, which would be compatible with games the company had already released on iTunes and Google Play. Elite Systems took down its Spectrum Collection application the following month, due to complaints from authors of the original software that they had not been paid for the content. Wired UK described the finished device, which was styled as an original Spectrum 48K keyboard, as "absolutely gorgeous" but said it was ultimately more of an expensive novelty than an actual Spectrum. In July 2019, Eurogamer reported that many of the orders had yet to be delivered due to a dispute between Elite Systems and their manufacturer, Eurotech. 
Later in 2014, the ZX Spectrum Vega retro video game console was announced by Retro Computers and crowdfunded on Indiegogo with the backing of Clive Sinclair. The Vega, released in 2015, took the form of a handheld TV game, but the lack of a full keyboard drew criticism from reviewers, given the large number of text adventures supplied with the device. Most reviewers branded the device cheap and uncomfortable to use. The follow-up, the ZX Spectrum Vega+, was designed as a handheld game console. Despite reaching its crowdfunding target in March 2016, the company failed to fulfil the majority of orders. Reviewing the Vega+, The Register criticised numerous aspects and features of the machine, including its design and build quality, and summed up by saying that the "entire feel is plasticky and inconsequential". Retro Computers Ltd was placed into liquidation in 2019. The ZX Spectrum Next is an expanded and updated version of the ZX Spectrum implemented with FPGA technology, funded by a Kickstarter campaign in April 2017, with the board-only computer delivered to backers later that year. The finished machine, including a case designed by Rick Dickinson, who died during the development of the project, was released to backers in February 2020. MagPi called it "a lovely piece of kit", noting that it is "well-designed and well-built: authentic to the original, and with technology that nods to the past while remaining functional and relevant in the modern age". PC Pro magazine called the Next "undeniably impressive" while noting that some features were "not quite ready". A further Kickstarter for an improved revision of the hardware was funded in August 2020. In August 2024, Retro Games announced that they would be releasing a recreation of the ZX Spectrum titled "The Spectrum", which would include 48 built-in games, a save-game option, a rewind mode and the ability to load users' own titles. It is set to be released on 22 November 2024. In popular culture A running gag in Scott Pilgrim vs. the World features the character Julie Powers being censored with the sound effects of the ZX Spectrum. The film's director, Edgar Wright, a big fan of the Spectrum, recalled that as a teenager he always used to wait for Spectrum games to load. On 23 April 2012, a Google doodle honoured the 30th anniversary of the Spectrum. As it coincided with St George's Day, the Google logo depicted St George fighting a dragon in the style of a Spectrum loading screen. One of the alternate endings in the interactive film Black Mirror: Bandersnatch (2018) has the main character playing data tape audio that, when loaded into a ZX Spectrum software emulator, generates a QR code leading to a website with a playable version of the "Nohzdyve" game featured in the film.
Technology
Specific hardware
null
34513
https://en.wikipedia.org/wiki/0
0
0 (zero) is a number representing an empty quantity. Adding 0 to (or subtracting 0 from) any number leaves that number unchanged; in mathematical terminology, 0 is the additive identity of the integers, rational numbers, real numbers, and complex numbers, as well as other algebraic structures. Multiplying any number by 0 results in 0, and consequently division by zero has no meaning in arithmetic. As a numerical digit, 0 plays a crucial role in decimal notation: it indicates that the power of ten corresponding to the place containing a 0 does not contribute to the total. For example, "205" in decimal means two hundreds, no tens, and five ones. The same principle applies in place-value notations that use a base other than ten, such as binary and hexadecimal. The modern use of 0 in this manner derives from Indian mathematics that was transmitted to Europe via medieval Islamic mathematicians and popularized by Fibonacci. It was independently used by the Maya. Common names for the number 0 in English include zero, nought, naught, and nil. In contexts where at least one adjacent digit distinguishes it from the letter O, the number is sometimes pronounced as oh or o. Informal or slang terms for 0 include zilch and zip. Historically, ought, aught, and cipher have also been used. Etymology The word zero came into the English language via French zéro from the Italian zero, a contraction of the Venetian form of the Italian zefiro, via the Arabic ṣafira or ṣifr. In pre-Islamic times the Arabic word ṣifr had the meaning "empty". Ṣifr evolved to mean zero when it was used to translate the Sanskrit śūnya from India. The first known English use of zero was in 1598. The Italian mathematician Fibonacci, who grew up in North Africa and is credited with introducing the decimal system to Europe, used the term zephyrum. This became zefiro in Italian, and was then contracted to zero in Venetian. The Italian word zefiro was already in existence (meaning "west wind", from Latin zephyrus and Greek zephyros) and may have influenced the spelling when transcribing the Arabic ṣifr. Modern usage Depending on the context, there may be different words used for the number zero, or the concept of zero. For the simple notion of lacking, the words "nothing" (although this is not strictly accurate) and "none" are often used. The British English words "nought" or "naught", and "nil" are also synonymous. It is often called "oh" in the context of reading out a string of digits, such as telephone numbers, street addresses, credit card numbers, military time, or years. For example, the area code 201 may be pronounced "two oh one", and the year 1907 is often pronounced "nineteen oh seven". The presence of other digits, indicating that the string contains only numbers, avoids confusion with the letter O. For this reason, systems that include strings with both letters and numbers (such as Canadian postal codes) may exclude the use of the letter O. Slang words for zero include "zip", "zilch", "nada", and "scratch". In the context of sports, "nil" is sometimes used, especially in British English. Several sports have specific words for a score of zero, such as "love" in tennis – from French l'œuf, "the egg" – and "duck" in cricket, a shortening of "duck's egg". "Goose egg" is another general slang term used for zero. History Ancient Near East Ancient Egyptian numerals were base 10. They used hieroglyphs for the digits and were not positional. 
In one papyrus, a scribe recorded daily incomes and expenditures for the pharaoh's court, using the nfr hieroglyph to indicate cases where the amount of a foodstuff received was exactly equal to the amount disbursed. The Egyptologist Alan Gardiner suggested that the nfr hieroglyph was being used as a symbol for zero. The same symbol was also used to indicate the base level in drawings of tombs and pyramids, and distances were measured relative to the base line as being above or below this line. By the middle of the 2nd millennium BC, Babylonian mathematics had a sophisticated base 60 positional numeral system. The lack of a positional value (or zero) was indicated by a space between sexagesimal numerals. In a tablet unearthed at Kish, the scribe Bêl-bân-aplu used three hooks as a placeholder in the same Babylonian system. Later, a punctuation symbol (two slanted wedges) was repurposed as a placeholder. The Babylonian positional numeral system differed from the later Hindu–Arabic system in that it did not explicitly specify the magnitude of the leading sexagesimal digit, so that for example the lone digit 1 might represent any of 1, 60, 3600 = 60², etc., similar to the significand of a floating-point number but without an explicit exponent, and so only distinguished implicitly from context. The zero-like placeholder mark was only ever used in between digits, but never alone or at the end of a number. Pre-Columbian Americas The Mesoamerican Long Count calendar developed in south-central Mexico and Central America required the use of zero as a placeholder within its vigesimal (base-20) positional numeral system. Many different glyphs, including the partial quatrefoil, were used as a zero symbol for these Long Count dates, the earliest of which (on Stela 2 at Chiapa de Corzo, Chiapas) has a date of 36 BC. Since the eight earliest Long Count dates appear outside the Maya homeland, it is generally believed that the use of zero in the Americas predated the Maya and was possibly the invention of the Olmecs. Many of the earliest Long Count dates were found within the Olmec heartland, although the Olmec civilization ended several centuries before the earliest known Long Count dates. Although zero became an integral part of Maya numerals, with a different, empty tortoise-like "shell shape" used for many depictions of the "zero" numeral, it is assumed not to have influenced Old World numeral systems. The quipu, a knotted cord device used in the Inca Empire and its predecessor societies in the Andean region to record accounting and other digital data, encodes numbers in a base ten positional system. Zero is represented by the absence of a knot in the appropriate position. Classical antiquity The ancient Greeks had no symbol for zero (μηδέν, pronounced 'midén'), and did not use a digit placeholder for it. According to the mathematician Charles Seife, the ancient Greeks did begin to adopt the Babylonian placeholder zero for their work in astronomy after 500 BC, representing it with the lowercase Greek letter ό (όμικρον: omicron). However, after using the Babylonian placeholder zero for astronomical calculations they would typically convert the numbers back into Greek numerals. The Greeks seemed to have a philosophical opposition to using zero as a number. 
Other scholars give the Greek partial adoption of the Babylonian zero a later date, with the neuroscientist Andreas Nieder giving a date of after 400 BC and the mathematician Robert Kaplan dating it after the conquests of Alexander. The Greeks seemed unsure about the status of zero as a number. Some of them asked themselves, "How can not being be?", leading to philosophical and, by the medieval period, religious arguments about the nature and existence of zero and the vacuum. The paradoxes of Zeno of Elea depend in large part on the uncertain interpretation of zero. By AD 150, Ptolemy, influenced by Hipparchus and the Babylonians, was using a symbol for zero in his work on mathematical astronomy called the Syntaxis Mathematica, also known as the Almagest. This Hellenistic zero was perhaps the earliest documented use of a numeral representing zero in the Old World. Ptolemy used it many times in his Almagest (VI.8) for the magnitude of solar and lunar eclipses. It represented the value of both digits and minutes of immersion at first and last contact. Digits varied continuously from 0 to 12 to 0 as the Moon passed over the Sun (a triangular pulse), where twelve digits was the angular diameter of the Sun. Minutes of immersion was tabulated from 0′0″ to 31′20″ to 0′0″, where 0′0″ used the symbol as a placeholder in two positions of his sexagesimal positional numeral system, while the combination meant a zero angle. Minutes of immersion was also a continuous function 1⁄12 × 31′20″ × √(d(24−d)) (a triangular pulse with convex sides), where d was the digit function and 31′20″ was the sum of the radii of the Sun's and Moon's discs. Ptolemy's symbol was a placeholder as well as a number used by two continuous mathematical functions, one within another, so it meant zero, not none. Over time, Ptolemy's zero tended to increase in size and lose the overline, sometimes depicted as a large elongated 0-like omicron "Ο" or as omicron with overline "ō" instead of a dot with overline. The earliest use of zero in the calculation of the Julian Easter occurred before AD 311, at the first entry in a table of epacts as preserved in an Ethiopic document for the years 311 to 369, using a Ge'ez word for "none" (rendered as "0" elsewhere in English translation) alongside Ge'ez numerals (based on Greek numerals), which was translated from an equivalent table published by the Church of Alexandria in Medieval Greek. This use was repeated in 525 in an equivalent table that was translated via the Latin nulla ("none") by Dionysius Exiguus, alongside Roman numerals. When division produced zero as a remainder, nihil, meaning "nothing", was used. These medieval zeros were used by all future medieval calculators of Easter. The initial "N" was used as a zero symbol in a table of Roman numerals by Bede—or his colleagues—around AD 725. In most cultures, 0 was identified before the idea of negative things (i.e., quantities less than zero) was accepted. China The Sūnzĭ Suànjīng, of unknown date but estimated to date from the 1st to 5th centuries AD, describes how the Chinese counting rod system enabled one to perform decimal calculations. As noted in the Xiahou Yang Suanjing (425–468 AD), to multiply or divide a number by 10, 100, 1000, or 10000, all one needs to do, with rods on the counting board, is to move them forwards, or back, by 1, 2, 3, or 4 places. The rods gave the decimal representation of a number, with an empty space denoting zero. The counting rod system is a positional notation system. Zero was not treated as a number at that time, but as a "vacant position". 
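The place-shifting arithmetic described above has a direct modern analogue: in any positional representation, multiplying by the base appends a zero (a "vacant position") and dividing removes one. A toy sketch (ours, not period notation):

```python
def shift_left(digits, places=1):
    """Multiply by 10**places: move every rod one column over,
    leaving vacant positions (zeros) behind."""
    return digits + [0] * places

def shift_right(digits, places=1):
    """Divide by 10**places, discarding any remainder."""
    return digits[:-places] if places else list(digits)

print(shift_left([4, 2]))            # [4, 2, 0]  -> 42 becomes 420
print(shift_right([4, 2, 0]))        # [4, 2]     -> 420 becomes 42
```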
Qín Jiǔsháo's 1247 Mathematical Treatise in Nine Sections is the oldest surviving Chinese mathematical text using a round symbol '〇' for zero. The origin of this symbol is unknown; it may have been produced by modifying a square symbol. Chinese authors had been familiar with the idea of negative numbers by the Han dynasty, as seen in The Nine Chapters on the Mathematical Art. India Pingala (c. 3rd or 2nd century BC), a Sanskrit prosody scholar, used binary sequences, in the form of short and long syllables (the latter equal in length to two short syllables), to identify the possible valid Sanskrit meters, a notation similar to Morse code. Pingala used the Sanskrit word śūnya explicitly to refer to zero. The concept of zero as a written digit in the decimal place value notation was developed in India. A symbol for zero, a large dot likely to be the precursor of the still-current hollow symbol, is used throughout the Bakhshali manuscript, a practical manual on arithmetic for merchants. In 2017, researchers at the Bodleian Library reported radiocarbon dating results for three samples from the manuscript, indicating that they came from three different centuries: AD 224–383, AD 680–779, and AD 885–993. It is not known how the birch bark fragments from different centuries forming the manuscript came to be packaged together. If the writing on the oldest birch bark fragments is as old as those fragments, it represents South Asia's oldest recorded use of a zero symbol. However, it is possible that the writing dates instead to the time period of the youngest fragments, AD 885–993. The latter dating has been argued to be more consistent with the sophisticated use of zero within the document, as portions of it appear to show zero being employed as a number in its own right, rather than only as a positional placeholder. The Lokavibhāga, a Jain text on cosmology surviving in a medieval Sanskrit translation of the Prakrit original, which is internally dated to AD 458 (Saka era 380), uses a decimal place-value system, including a zero. In this text, śūnya ("void, empty") is also used to refer to zero. The Aryabhatiya (c. 499) states sthānāt sthānaṁ daśaguṇaṁ syāt, "from place to place each is ten times the preceding". Rules governing the use of zero appeared in Brahmagupta's Brāhmasphuṭasiddhānta (7th century), which states the sum of zero with itself as zero, and incorrectly describes division by zero in the following way: A positive or negative number when divided by zero is a fraction with the zero as denominator. Zero divided by a negative or positive number is either zero or is expressed as a fraction with zero as numerator and the finite quantity as denominator. Zero divided by zero is zero. Epigraphy A black dot is used as a decimal placeholder in the Bakhshali manuscript, portions of which date from AD 224–993. There are numerous copper plate inscriptions with the same small o in them, some of them possibly dated to the 6th century, but their date or authenticity may be open to doubt. A stone tablet found in the ruins of a temple near Sambor on the Mekong, Kratié Province, Cambodia, includes the inscription of "605" in Khmer numerals (a set of numeral glyphs for the Hindu–Arabic numeral system). The number is the year of the inscription in the Saka era, corresponding to a date of AD 683. 
The first known use of special glyphs for the decimal digits that includes the indubitable appearance of a symbol for the digit zero, a small circle, appears on a stone inscription found at the Chaturbhuj Temple, Gwalior, in India, dated AD 876. Middle Ages Transmission to Islamic culture The Arabic-language inheritance of science was largely Greek, followed by Hindu influences. In 773, at Al-Mansur's behest, translations were made of many ancient treatises, including Greek, Roman, Indian, and others. In AD 813, astronomical tables were prepared by a Persian mathematician, Muḥammad ibn Mūsā al-Khwārizmī, using Hindu numerals; and about 825, he published a book that synthesized Greek and Hindu knowledge and also contained his own contributions to mathematics, including an explanation of the use of zero. This book was later translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "al-Khwarizmi on the Numerals of the Indians". The word "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name, and the word "Algorithm" or "Algorism" started to acquire the meaning of any arithmetic based on decimals. Muhammad ibn Ahmad al-Khwarizmi, in 976, stated that if no number appears in the place of tens in a calculation, a little circle should be used "to keep the rows". This circle was called ṣifr. Transmission to Europe The Hindu–Arabic numeral system (base 10) reached Western Europe in the 11th century, via Al-Andalus, through Spanish Muslims, the Moors, together with knowledge of classical astronomy and instruments like the astrolabe. Gerbert of Aurillac is credited with reintroducing the lost teachings into Catholic Europe. For this reason, the numerals came to be known in Europe as "Arabic numerals". The Italian mathematician Fibonacci, or Leonardo of Pisa, was instrumental in bringing the system into European mathematics in 1202, stating: After my father's appointment by his homeland as state official in the customs house of Bugia for the Pisan merchants who thronged to it, he took charge; and in view of its future usefulness and convenience, had me in my boyhood come to him and there wanted me to devote myself to and be instructed in the study of calculation for some days. There, following my introduction, as a consequence of marvelous instruction in the art, to the nine digits of the Hindus, the knowledge of the art very much appealed to me before all others, and for it I realized that all its aspects were studied in Egypt, Syria, Greece, Sicily, and Provence, with their varying methods; and at these places thereafter, while on business, I pursued my study in depth and learned the give-and-take of disputation. But all this even, and the algorism, as well as the art of Pythagoras, I considered as almost a mistake in respect to the method of the Hindus (Modus Indorum). Therefore, embracing more stringently that method of the Hindus, and taking stricter pains in its study, while adding certain things from my own understanding and inserting also certain things from the niceties of Euclid's geometric art, I have striven to compose this book in its entirety as understandably as I could, dividing it into fifteen chapters. Almost everything which I have introduced I have displayed with exact proof, in order that those further seeking this knowledge, with its pre-eminent method, might be instructed, and further, in order that the Latin people might not be discovered to be without it, as they have been up to now. 
If I have perchance omitted anything more or less proper or necessary, I beg indulgence, since there is no one who is blameless and utterly provident in all things. The nine Indian figures are: 9 8 7 6 5 4 3 2 1. With these nine figures, and with the sign 0... any number may be written. From the 13th century, manuals on calculation (adding, multiplying, extracting roots, etc.) became common in Europe, where they were called algorismus after the Persian mathematician al-Khwārizmī. One popular manual was written by Johannes de Sacrobosco in the early 1200s and was one of the earliest scientific books to be printed, in 1488. The practice of calculating on paper using Hindu–Arabic numerals only gradually displaced calculation by abacus and recording with Roman numerals. In the 16th century, Hindu–Arabic numerals became the predominant numerals used in Europe. Symbols and representations Today, the numerical digit 0 is usually written as a circle or ellipse. Traditionally, many print typefaces made the capital letter O more rounded than the narrower, elliptical digit 0. Typewriters originally made no distinction in shape between O and 0; some models did not even have a separate key for the digit 0. The distinction came into prominence on modern character displays. A slashed zero is often used to distinguish the number from the letter (mostly in computing, navigation and the military, for example). The digit 0 with a dot in the center seems to have originated as an option on IBM 3270 displays and has continued with some modern computer typefaces such as Andalé Mono, and in some airline reservation systems. One variation uses a short vertical bar instead of the dot. Some fonts designed for use with computers made the "0" character more squared at the edges, like a rectangle, and the "O" character more rounded. A further distinction is made in falsification-hindering typefaces, as used on German car number plates, by slitting open the digit 0 on the upper right side. In some systems either the letter O or the numeral 0, or both, are excluded from use, to avoid confusion. Mathematics The concept of zero plays multiple roles in mathematics: as a digit, it is an important part of positional notation for representing numbers, while it also plays an important role as a number in its own right in many algebraic settings. As a digit In positional number systems (such as the usual decimal notation for representing numbers), the digit 0 plays the role of a placeholder, indicating that certain powers of the base do not contribute. For example, the decimal number 205 is the sum of two hundreds and five ones, with the 0 digit indicating that no tens are added. The digit plays the same role in decimal fractions and in the decimal representation of other real numbers (indicating whether any tenths, hundredths, thousandths, etc., are present) and in bases other than 10 (for example, in binary, where it indicates which powers of 2 are omitted). Elementary algebra The number 0 is the smallest nonnegative integer, and the largest nonpositive integer. The natural number following 0 is 1 and no natural number precedes 0. The number 0 may or may not be considered a natural number, but it is an integer, and hence a rational number and a real number. All rational numbers are algebraic numbers, including 0. When the real numbers are extended to form the complex numbers, 0 becomes the origin of the complex plane. 
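Before continuing with zero's algebraic properties, the placeholder role described under "As a digit" above can be made concrete with a small sketch (plain Python; the function name is ours):

```python
def digits_to_value(digits, base=10):
    """Interpret a list of digits in the given base. A 0 digit adds
    nothing to the total; it only holds its place."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

assert digits_to_value([2, 0, 5]) == 205        # the 0 keeps the 2 in the hundreds place
assert digits_to_value([2, 5]) == 25            # drop it and the meaning changes
assert digits_to_value([1, 0, 1], base=2) == 5  # the same principle in binary
```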
The number 0 can be regarded as neither positive nor negative or, alternatively, both positive and negative, and is usually displayed as the central number in a number line. Zero is even (that is, a multiple of 2), and is also an integer multiple of any other integer, rational, or real number. It is neither a prime number nor a composite number: it is not prime because prime numbers are greater than 1 by definition, and it is not composite because it cannot be expressed as the product of two smaller natural numbers. (However, the singleton set {0} is a prime ideal in the ring of the integers.) The following are some basic rules for dealing with the number 0. These rules apply for any real or complex number x, unless otherwise stated. Addition: x + 0 = 0 + x = x. That is, 0 is an identity element (or neutral element) with respect to addition. Subtraction: x − 0 = x and 0 − x = −x. Multiplication: x · 0 = 0 · x = 0. Division: 0/x = 0, for nonzero x. But x/0 is undefined, because 0 has no multiplicative inverse (no real number multiplied by 0 produces 1), a consequence of the previous rule. Exponentiation: x⁰ = x/x = 1, except that the case x = 0 is considered undefined in some contexts. For all positive real x, 0ˣ = 0. The expression 0/0, which may be obtained in an attempt to determine the limit of an expression of the form f(x)/g(x) as a result of applying the lim operator independently to both operands of the fraction, is a so-called "indeterminate form". That does not mean that the limit sought is necessarily undefined; rather, it means that the limit of f(x)/g(x), if it exists, must be found by another method, such as l'Hôpital's rule. The sum of 0 numbers (the empty sum) is 0, and the product of 0 numbers (the empty product) is 1. The factorial 0! evaluates to 1, as a special case of the empty product. Other uses in mathematics The role of 0 as the smallest counting number can be generalized or extended in various ways. In set theory, 0 is the cardinality of the empty set (notated as "{ }" or "∅"): if one does not have any apples, then one has 0 apples. In fact, in certain axiomatic developments of mathematics from set theory, 0 is defined to be the empty set. When this is done, the empty set is the von Neumann cardinal assignment for a set with no elements, which is the empty set. The cardinality function, applied to the empty set, returns the empty set as a value, thereby assigning it 0 elements. Also in set theory, 0 is the lowest ordinal number, corresponding to the empty set viewed as a well-ordered set. In order theory (and especially its subfield lattice theory), 0 may denote the least element of a lattice or other partially ordered set. The role of 0 as additive identity generalizes beyond elementary algebra. In abstract algebra, 0 is commonly used to denote a zero element, which is the identity element for addition (if defined on the structure under consideration) and an absorbing element for multiplication (if defined). Examples include the identity elements of additive groups and vector spaces. Another example is the zero function (or zero map) on a domain D: the constant function with 0 as its only possible output value, that is, the function defined by f(x) = 0 for all x in D. As a function from the real numbers to the real numbers, the zero function is the only function that is both even and odd. The number 0 is also used in several other ways within various branches of mathematics: A zero of a function f is a point x in the domain of the function such that f(x) = 0. 
In propositional logic, 0 may be used to denote the truth value false. In probability theory, 0 is the smallest allowed value for the probability of any event. Category theory introduces the idea of a zero object, often denoted 0, and the related concept of zero morphisms, which generalize the zero function. Physics The value zero plays a special role for many physical quantities. For some quantities, the zero level is naturally distinguished from all other levels, whereas for others it is more or less arbitrarily chosen. For example, for an absolute temperature (typically measured in kelvins), zero is the lowest possible value. (Negative temperatures can be defined for some physical systems, but negative-temperature systems are not actually colder.) This is in contrast to temperatures on the Celsius scale, for example, where zero is arbitrarily defined to be at the freezing point of water. Measuring sound intensity in decibels or phons, the zero level is arbitrarily set at a reference value—for example, at a value for the threshold of hearing. In physics, the zero-point energy is the lowest possible energy that a quantum mechanical physical system may possess and is the energy of the ground state of the system. Computer science Modern computers store information in binary, that is, using an "alphabet" that contains only two symbols, usually chosen to be "0" and "1". Binary coding is convenient for digital electronics, where "0" and "1" can stand for the absence or presence of electrical current in a wire. Computer programmers typically use high-level programming languages that are more easily intelligible to humans than the binary instructions that are directly executed by the central processing unit. 0 plays various important roles in high-level languages. For example, a Boolean variable stores a value that is either true or false, and 0 is often the numerical representation of false. 0 also plays a role in array indexing. The most common practice throughout human history has been to start counting at one, and this is the practice in early classic programming languages such as Fortran and COBOL. However, in the late 1950s LISP introduced zero-based numbering for arrays, while Algol 58 introduced completely flexible basing for array subscripts (allowing any positive, negative, or zero integer as the base for array subscripts), and most subsequent programming languages adopted one or other of these positions. For example, the elements of an array are numbered starting from 0 in C, so that for an array of n items the sequence of array indices runs from 0 to n − 1. There can be confusion between 0- and 1-based indexing; for example, Java's JDBC indexes parameters from 1, although Java itself uses 0-based indexing. In C, a byte containing the value 0 serves to indicate where a string of characters ends. Also, 0 is a standard way to refer to a null pointer in code. In databases, it is possible for a field not to have a value; it is then said to have a null value. For numeric fields, the null value is not the value zero. For text fields, it is neither blank nor the empty string. The presence of null values leads to three-valued logic: a condition is no longer simply true or false, but can be undetermined. Any computation including a null value delivers a null result. In mathematics, there is no "positive zero" or "negative zero" distinct from zero; both −0 and +0 represent exactly the same number. 
However, in some computer hardware signed number representations, zero has two distinct representations, a positive one grouped with the positive numbers and a negative one grouped with the negatives. This kind of dual representation is known as signed zero, with the latter form sometimes called negative zero. These representations include the signed magnitude and ones' complement binary integer representations (but not the two's complement binary form used in most modern computers), and most floating-point number representations (such as the IEEE 754 and IBM S/390 floating-point formats). An epoch, in computing terminology, is the date and time associated with a zero timestamp. The Unix epoch begins at midnight at the start of 1 January 1970. The Classic Mac OS and Palm OS epochs begin at midnight at the start of 1 January 1904. Many APIs and operating systems that require applications to return an integer value as an exit status typically use zero to indicate success and non-zero values to indicate specific error or warning conditions. Programmers often use a slashed zero to avoid confusion with the letter "O". Other fields Biology In comparative zoology and cognitive science, recognition that some animals display awareness of the concept of zero has led to the conclusion that the capability for numerical abstraction arose early in the evolution of species. Dating systems In the BC calendar era, the year 1 BC is the first year before AD 1; there is no year zero. By contrast, in astronomical year numbering, the year 1 BC is numbered 0, the year 2 BC is numbered −1, and so forth.
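The two floating-point zeros mentioned under computer science above are easy to observe from any language that exposes IEEE 754 doubles. The snippet below uses Python; the behaviour shown is standard IEEE 754 semantics rather than anything Python-specific.

```python
import math
import struct

pos, neg = 0.0, -0.0
print(pos == neg)                     # True: the two zeros compare equal
print(math.copysign(1.0, neg))        # -1.0: yet a sign bit really is stored
print(struct.pack(">d", pos).hex())   # 0000000000000000
print(struct.pack(">d", neg).hex())   # 8000000000000000 -- only the sign bit differs
print(math.atan2(pos, -1.0))          #  pi: the sign of zero can matter...
print(math.atan2(neg, -1.0))          # -pi: ...in directional functions like atan2
```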
Mathematics
Counting and numbers
null
34522
https://en.wikipedia.org/wiki/Zwitterion
Zwitterion
In chemistry, a zwitterion, also called an inner salt or dipolar ion, is a molecule that contains an equal number of positively and negatively charged functional groups. 1,2-dipolar compounds, such as ylides, are sometimes excluded from the definition. Some zwitterions, such as amino acid zwitterions, are in chemical equilibrium with an uncharged "parent" molecule. Betaines are zwitterions that cannot isomerize to an all-neutral form, such as when the positive charge is located on a quaternary ammonium group. Similarly, a molecule containing a phosphonium group and a carboxylate group cannot isomerize. Amino acids Tautomerism of amino acids follows this stoichiometry: H₂N–CHR–CO₂H ⇌ ⁺H₃N–CHR–CO₂⁻ The ratio of the concentrations of the two species in solution is independent of pH. It has been suggested, on the basis of theoretical analysis, that the zwitterion is stabilized in aqueous solution by hydrogen bonding with solvent water molecules. Analysis of neutron diffraction data for glycine showed that it was in the zwitterionic form in the solid state and confirmed the presence of hydrogen bonds. Theoretical calculations have been used to show that zwitterions may also be present in the gas phase for some cases different from the simple carboxylic acid-to-amine transfer. The pKa values for deprotonation of the common amino acids all lie near 2. This is also consistent with the zwitterion being the predominant isomer present in aqueous solution. For comparison, the simple carboxylic acid propionic acid (CH₃CH₂CO₂H) has a pKa value of 4.88. Other compounds Sulfamic acid crystallizes in the zwitterion form. In crystals of anthranilic acid there are two molecules in the unit cell: one molecule is in the zwitterion form, the other is not. In the solid state, H₄EDTA is a zwitterion, with two protons having been transferred from carboxylic acid groups to the nitrogen atoms. In psilocybin, the proton on the dimethylamino group is labile and may jump to the phosphate group to form a compound which is not a zwitterion. Theoretical studies Insight into the equilibrium in solution may be gained from the results of theoretical calculations. For example, pyridoxal phosphate, a form of vitamin B6, in aqueous solution is predicted to have an equilibrium favoring a tautomeric form in which a proton is transferred from the phenolic –OH group to the nitrogen atom. Because tautomers are different compounds, they sometimes have sufficiently different structures that they can be detected independently in their mixture, which allows experimental analysis of the equilibrium. Betaines and similar compounds The compound trimethylglycine, which was isolated from sugar beet, was named "betaine". Later, other compounds were discovered that contain the same structural motif: a quaternary nitrogen atom with a carboxylate group attached to it via a –CH₂– link. At the present time, all compounds whose structure includes this motif are known as betaines. Betaines do not isomerize because the chemical groups attached to the nitrogen atom are not labile. These compounds may be classed as permanent zwitterions, as isomerisation to a molecule with no electrical charges does not occur, or is very slow. 
Other examples of permanent zwitterions include phosphatidylcholines, which also contain a quaternary nitrogen atom, but with a negatively charged phosphate group in place of a carboxylate group; sulfobetaines, which contain a quaternary nitrogen atom and a negatively charged sulfonate group; and pulmonary surfactants such as dipalmitoylphosphatidylcholine. Lauramidopropyl betaine is the major component of cocamidopropyl betaine. Conjugated zwitterions Strongly polarized conjugated compounds (conjugated zwitterions) are typically very reactive, share diradical character, activate strong bonds and small molecules, and serve as transient intermediates in catalysis. Donor–acceptor entities are widely used in photochemistry (photoinduced electron transfer), organic electronics, switching and sensing.
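As a numerical footnote to the pKa comparison in the amino acids section: the fraction of a carboxyl group that is deprotonated at a given pH follows from the Henderson–Hasselbalch equation. The sketch below is illustrative only; the glycine pKa of 2.34 is a textbook value assumed here, while the propionic acid value of 4.88 is quoted above.

```python
def fraction_deprotonated(pH, pKa):
    """Henderson-Hasselbalch: fraction of an acid group in its
    deprotonated (carboxylate) form at the given pH."""
    ratio = 10 ** (pH - pKa)          # [A-] / [HA]
    return ratio / (1 + ratio)

# Carboxyl group of glycine (pKa 2.34, assumed) vs. propionic acid (pKa 4.88)
for name, pKa in [("glycine -COOH", 2.34), ("propionic acid", 4.88)]:
    print(f"{name}: {fraction_deprotonated(7.0, pKa):.6f} deprotonated at pH 7")
```

Both groups are essentially fully deprotonated at neutral pH, but glycine's markedly lower pKa reflects the electron-withdrawing ammonium group of the zwitterion.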
Physical sciences
Salts and ions: General
Chemistry